Free Launch: Optimizing GPU Dynamic Kernel Launches through Thread Reuse

Guoyang Chen, Xipeng Shen
Computer Science Department, North Carolina State University, 890 Oval Drive, Raleigh, NC, USA 27695
The 48th Annual IEEE/ACM International Symposium on Microarchitecture, 2015

@inproceedings{chen2015free,

   title={Free Launch: Optimizing GPU Dynamic Kernel Launches through Thread Reuse},

   author={Chen, Guoyang and Shen, Xipeng},

   booktitle={The 48th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO)},

   year={2015}

}

Supporting dynamic parallelism is important for GPUs to benefit a broad range of applications. There are currently two fundamental ways for programs to exploit dynamic parallelism on GPUs: a software-based approach with software-managed worklists, and a hardware-based approach through dynamic subkernel launches. Neither is satisfactory. The former is complicated to program and is often subject to load imbalance; the latter suffers from large runtime overhead. In this work, we propose free launch, a new software approach that overcomes the shortcomings of both methods. It allows programmers to use subkernel launches to express dynamic parallelism. It employs a novel compiler-based code transformation, named subkernel launch removal, to replace the subkernel launches with reuse of the parent threads. Coupled with an adaptive task-assignment mechanism, the transformation reassigns the tasks of the subkernels to the parent threads with good load balance. The technique requires no hardware extensions and is immediately deployable on existing GPUs. It keeps the programming convenience of the subkernel-launch-based approach while avoiding its large runtime overhead. Meanwhile, its superior load balancing lets it outperform manual worklist-based techniques by 3X on average.
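To illustrate the idea of replacing subkernel launches with parent-thread reuse, the following minimal CUDA sketch contrasts a dynamic-parallelism parent kernel with a hand-transformed version. The kernel names and the trivial task body are illustrative assumptions, not code from the paper; the paper's actual transformation is compiler-driven and pairs with an adaptive task-assignment mechanism not shown here.

```cuda
#include <cuda_runtime.h>

// Child task body: each child thread doubles one element.
__global__ void childKernel(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2;
}

// Before: the parent uses CUDA dynamic parallelism (compute
// capability >= 3.5) to spawn a subkernel at run time. Each such
// launch incurs the large runtime overhead the paper targets.
__global__ void parentKernel(int *data, int n) {
    if (threadIdx.x == 0 && blockIdx.x == 0) {
        childKernel<<<(n + 255) / 256, 256>>>(data, n);
    }
}

// After (in the spirit of subkernel launch removal): the child's task
// body is inlined, and the parent's own threads sweep the child's
// index space with a grid-stride loop, so no subkernel launch occurs.
__global__ void parentKernelTransformed(int *data, int n) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += blockDim.x * gridDim.x) {
        data[i] *= 2;  // reused parent threads execute the child's tasks
    }
}
```

The grid-stride loop also handles the load-balancing concern in this simple case: however many parent threads exist, the child's n tasks are spread evenly across them.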

* * *

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors