
Dynamic Task Parallelism with a GPU Work-Stealing Runtime System

Sanjay Chatterjee, Max Grossman, Alina Sbirlea, Vivek Sarkar
Department of Computer Science, Rice University
Workshop on Languages and Compilers for Parallel Computing (LCPC), 2011

@inproceedings{chatterjee2011dynamic,
   title={Dynamic Task Parallelism with a GPU Work-Stealing Runtime System},
   author={Chatterjee, Sanjay and Grossman, Max and Sb{\^\i}rlea, Alina and Sarkar, Vivek},
   booktitle={Workshop on Languages and Compilers for Parallel Computing (LCPC)},
   year={2011}
}


NVIDIA’s Compute Unified Device Architecture (CUDA) and its C/C++-based API went a long way towards making GPUs more accessible to mainstream programming. So far, the use of GPUs for high performance computing has been primarily restricted to data-parallel applications, and with good reason: the high number of computational cores and the high memory bandwidth of the device make it an ideal candidate for such applications. However, the GPU's potential for executing applications that combine dynamic task parallelism with data parallelism has not yet been explored in detail, in part because CUDA does not provide a viable interface for creating dynamic tasks and handling load balancing. Today, any support for dynamic task parallelism has to be orchestrated entirely by the CUDA programmer. In this work we extend CUDA by implementing a work-stealing runtime on the GPU. We introduce a finish-async style API for GPU device programming, with the aim of executing irregular applications efficiently across multiple streaming multiprocessors (SMs) in a GPU device without sacrificing the performance of regular data-parallel applications within an SM. We present the design of our new intra-device, inter-SM work-stealing runtime system, compare it to past work on GPU runtimes, and report performance evaluations comparing execution with our runtime to direct execution on the device. Finally, we show how this runtime can be targeted by extensions to the high-level CnC-CUDA programming model introduced in past work.
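
The core mechanism the abstract describes, persistent worker blocks that pop tasks from per-SM queues and steal from victims when idle, can be sketched in plain CUDA. The sketch below is an illustration under stated assumptions, not the paper's implementation: the names (Task, TaskQueue, take, worker) are hypothetical, the queues are guarded by a simple spinlock rather than a lock-free structure, and tasks are pre-populated by the host instead of being spawned by device-side async calls.

    // Minimal sketch of inter-block work stealing in CUDA. Illustrative only;
    // not the paper's actual runtime or API.
    #include <cstdio>
    #include <cuda_runtime.h>

    #define NUM_WORKERS 8     // one persistent block per queue (stand-in for "per SM")
    #define QUEUE_CAP   256

    struct Task { int start, len; };      // a chunk of data-parallel work

    struct TaskQueue {
        Task items[QUEUE_CAP];
        int  size;
        int  lock;                        // 0 = free, 1 = held
    };

    __device__ void lock_queue(TaskQueue* q)   { while (atomicCAS(&q->lock, 0, 1) != 0) {} }
    __device__ void unlock_queue(TaskQueue* q) { atomicExch(&q->lock, 0); }

    // Pop from our own queue if possible; otherwise try to steal from a victim.
    __device__ bool take(TaskQueue* qs, int self, Task* out) {
        for (int i = 0; i < NUM_WORKERS; ++i) {
            TaskQueue* q = &qs[(self + i) % NUM_WORKERS];  // self first, then victims
            lock_queue(q);
            bool ok = (q->size > 0);
            if (ok) *out = q->items[--q->size];
            unlock_queue(q);
            if (ok) return true;
        }
        return false;
    }

    // Persistent-worker kernel: thread 0 of each block fetches tasks; the whole
    // block then runs the task body data-parallel, so tasks migrate between
    // blocks via stealing while intra-SM data parallelism is preserved.
    __global__ void worker(TaskQueue* qs, float* data, int* remaining) {
        __shared__ Task t;
        __shared__ int  have;             // 1 = got a task, 0 = retry, -1 = done
        while (true) {
            __syncthreads();              // previous task fully finished
            if (threadIdx.x == 0) {
                if (take(qs, blockIdx.x, &t)) { have = 1; atomicSub(remaining, 1); }
                else have = (atomicAdd(remaining, 0) == 0) ? -1 : 0;
            }
            __syncthreads();              // t and have now visible to all threads
            if (have < 0) return;         // all tasks drained: terminate
            if (have == 0) continue;      // queues empty but tasks still in flight
            for (int i = t.start + threadIdx.x; i < t.start + t.len; i += blockDim.x)
                data[i] *= 2.0f;          // stand-in for the real task body
        }
    }

    int main() {
        const int ntasks = 64, chunk = 128, n = ntasks * chunk;
        TaskQueue* qs; float* data; int* remaining;
        cudaMallocManaged(&qs, NUM_WORKERS * sizeof(TaskQueue));
        cudaMallocManaged(&data, n * sizeof(float));
        cudaMallocManaged(&remaining, sizeof(int));
        for (int i = 0; i < n; ++i) data[i] = 1.0f;
        for (int w = 0; w < NUM_WORKERS; ++w) { qs[w].size = 0; qs[w].lock = 0; }
        // Deliberately imbalanced: all tasks start on queue 0, so the other
        // workers make progress only by stealing.
        for (int i = 0; i < ntasks; ++i) qs[0].items[qs[0].size++] = { i * chunk, chunk };
        *remaining = ntasks;
        worker<<<NUM_WORKERS, 128>>>(qs, data, remaining);
        cudaDeviceSynchronize();
        printf("data[0] = %.1f (expected 2.0)\n", data[0]);
        return 0;
    }

A full runtime along the lines described in the paper would additionally need device-side task creation (async), memory fences so that task payloads pushed on one SM are visible to stealers on another, termination detection for the enclosing finish scope, and a lock-free deque to reduce contention.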