CnC-CUDA: declarative programming for GPUs

Max Grossman, Alina Simion Sbirlea, Zoran Budimlic, Vivek Sarkar
Department of Computer Science, Rice University
Languages and Compilers for Parallel Computing, Lecture Notes in Computer Science, 2011, Volume 6548/2011, 230-245


@article{grossman2011cnccuda,
   title={CnC-CUDA: declarative programming for GPUs},
   author={Grossman, M. and Simion Sb\^{\i}rlea, A. and Budimli\'{c}, Z. and Sarkar, V.},
   journal={Languages and Compilers for Parallel Computing},
   series={Lecture Notes in Computer Science},
   volume={6548},
   pages={230--245},
   year={2011}
}








The computer industry is at a major inflection point in its hardware roadmap due to the end of a decades-long trend of exponentially increasing clock frequencies. Instead, future computer systems are expected to be built using homogeneous and heterogeneous many-core processors with tens to hundreds of cores per chip, and complex hardware designs that address the challenges of concurrency, energy efficiency, and resiliency. Unlike previous generations of hardware evolution, this shift towards many-core computing will have a profound impact on software. These software challenges are further compounded by the need to enable parallelism in workloads and application domains that traditionally did not have to worry about multiprocessor parallelism. A recent trend in mainstream desktop systems is the use of graphics processing units (GPUs) to obtain order-of-magnitude performance improvements over general-purpose CPUs. Unfortunately, hybrid programming models that support multithreaded execution on CPUs in parallel with CUDA execution on GPUs have proven too complex for mainstream programmers and domain experts, especially when targeting platforms with multiple CPU cores and multiple GPU devices.


HGPU group © 2010-2021 hgpu.org

All rights belong to the respective authors
