
Compiler-Assisted Workload Consolidation For Efficient Dynamic Parallelism on GPU

Hancheng Wu, Da Li, Michela Becchi
Dept. of Electrical and Computer Engineering, University of Missouri, Columbia, MO, USA
arXiv:1606.08150 [cs.DC] (27 Jun 2016)

@article{wu2016compilerassisted,
   title={Compiler-Assisted Workload Consolidation For Efficient Dynamic Parallelism on GPU},
   author={Wu, Hancheng and Li, Da and Becchi, Michela},
   year={2016},
   month={jun},
   eprint={1606.08150},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}


GPUs have been widely used to accelerate computations exhibiting simple patterns of parallelism (such as flat or two-level parallelism) and a degree of parallelism that can be statically determined from the size of the input dataset. However, the effective use of GPUs for algorithms exhibiting complex patterns of parallelism, possibly known only at runtime, is still an open problem. Recently, Nvidia introduced Dynamic Parallelism (DP) in its GPUs. By making it possible to launch kernels directly from GPU threads, this feature enables nested parallelism at runtime. However, the effective use of DP is still not well understood: a naive use of this feature may incur significant runtime overhead and lead to GPU underutilization, resulting in poor performance. In this work, we target this problem. First, we demonstrate how a naive use of DP can result in poor performance. Second, we propose three workload consolidation schemes to improve the performance and hardware utilization of DP-based codes, and we implement these code transformations in a directive-based compiler. Finally, we evaluate our framework on two categories of applications: algorithms with irregular loops and algorithms exhibiting parallel recursion. Our experiments show that our approach significantly reduces runtime overhead and improves GPU utilization, yielding speedups from 90x to 3300x over basic DP-based solutions and from 2x to 6x over flat implementations.
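
To make the consolidation idea concrete, below is a minimal CUDA sketch of the naive DP pattern the abstract warns about, together with one possible block-level consolidation. This is an illustration only, not the paper's compiler output: the kernel names, the CSR-style row_ptr layout, and the element-doubling workload are assumptions made for the example.

#include <cuda_runtime.h>

// Per-element work on one irregular range [begin, end).
__global__ void child(const float *in, float *out, int begin, int end) {
    int i = begin + blockIdx.x * blockDim.x + threadIdx.x;
    if (i < end) out[i] = 2.0f * in[i];   // placeholder workload
}

// Naive DP: every parent thread launches its own child grid. Each launch
// pays the device-side launch overhead, and short rows produce tiny,
// underutilized grids.
__global__ void parent_naive(const float *in, float *out,
                             const int *row_ptr, int n_rows) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n_rows) {
        int begin = row_ptr[row], end = row_ptr[row + 1];
        int len = end - begin;
        if (len > 0)
            child<<<(len + 127) / 128, 128>>>(in, out, begin, end);
    }
}

// Consolidated child: one block per buffered range, threads stride over it.
__global__ void child_consolidated(const float *in, float *out,
                                   const int2 *ranges) {
    int2 r = ranges[blockIdx.x];
    for (int i = r.x + threadIdx.x; i < r.y; i += blockDim.x)
        out[i] = 2.0f * in[i];
}

// Block-level consolidation: threads stage their subtask descriptors in a
// preallocated global buffer (range_buf must hold gridDim.x * blockDim.x
// int2 entries), and thread 0 launches a single child grid for the whole
// block, replacing up to blockDim.x tiny launches with one larger one.
__global__ void parent_consolidated(const float *in, float *out,
                                    const int *row_ptr, int n_rows,
                                    int2 *range_buf) {
    __shared__ int count;
    if (threadIdx.x == 0) count = 0;
    __syncthreads();
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n_rows) {
        int begin = row_ptr[row], end = row_ptr[row + 1];
        if (end > begin) {
            int slot = atomicAdd(&count, 1);
            range_buf[blockIdx.x * blockDim.x + slot] = make_int2(begin, end);
        }
    }
    __syncthreads();   // make all staged ranges visible before the launch
    if (threadIdx.x == 0 && count > 0)
        child_consolidated<<<count, 128>>>(
            in, out, range_buf + blockIdx.x * blockDim.x);
}

Device-side launches require Dynamic Parallelism support (compute capability 3.5 or higher) and relocatable device code, e.g. nvcc -rdc=true -arch=sm_35. The paper goes further than this hand-written sketch: its directive-based compiler applies three such consolidation schemes automatically.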