CnC-CUDA: declarative programming for GPUs
Department of Computer Science, Rice University
Languages and Compilers for Parallel Computing, Lecture Notes in Computer Science, 2011, Volume 6548/2011, 230-245
@article{grossman2011cnc,
title={CnC-CUDA: declarative programming for GPUs},
author={Grossman, M. and Simion Sb{\^i}rlea, A. and Budimli{\'c}, Z. and Sarkar, V.},
journal={Languages and Compilers for Parallel Computing},
pages={230--245},
year={2011},
publisher={Springer}
}
The computer industry is at a major inflection point in its hardware roadmap due to the end of a decades-long trend of exponentially increasing clock frequencies. Instead, future computer systems are expected to be built from homogeneous and heterogeneous many-core processors with tens to hundreds of cores per chip, and complex hardware designs to address the challenges of concurrency, energy efficiency and resiliency. Unlike previous generations of hardware evolution, this shift toward many-core computing will have a profound impact on software. These software challenges are further compounded by the need to enable parallelism in workloads and application domains that traditionally did not have to address multiprocessor parallelism. A recent trend in mainstream desktop systems is the use of graphics processing units (GPUs) to obtain order-of-magnitude performance improvements relative to general-purpose CPUs. Unfortunately, hybrid programming models that support multithreaded execution on CPUs in parallel with CUDA execution on GPUs prove to be too complex for use by mainstream programmers and domain experts, especially when targeting platforms with multiple CPU cores and multiple GPU devices.
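To make the complexity claim concrete, the sketch below shows the kind of low-level, imperative CUDA boilerplate that a declarative model like CnC-CUDA aims to hide. This is not code from the paper; it is a standard, hypothetical vector-add example illustrating the explicit device memory management, data transfers, launch configuration, and synchronization that the host programmer must otherwise manage by hand.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical kernel: element-wise vector addition on the GPU.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host-side allocation and initialization.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Explicit device allocation and host-to-device transfers --
    // bookkeeping a declarative model would generate automatically.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Manual choice of launch configuration (block/grid sizes).
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Explicit synchronization and device-to-host copy of the result.
    cudaDeviceSynchronize();
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Multiply this boilerplate across several CPU threads and several GPU devices, each with its own context, streams, and transfer schedule, and the coordination burden the abstract describes becomes clear.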
August 24, 2011 by hgpu