Automated architecture-aware mapping of streaming applications onto GPUs
School of Computing, National University of Singapore
IEEE International Parallel & Distributed Processing Symposium (IPDPS), 2011
@inproceedings{hagiescu2011automated,
  title={Automated architecture-aware mapping of streaming applications onto GPUs},
  author={Hagiescu, A. and Huynh, H.P. and Wong, W.F. and Goh, R.S.M.},
  booktitle={Parallel \& Distributed Processing Symposium (IPDPS), 2011 IEEE International},
  pages={467--478},
  year={2011},
  organization={IEEE}
}
Graphics Processing Units (GPUs) are made up of many streaming multiprocessors, each consisting of processing cores that interleave the execution of a large number of threads. Groups of threads, called warps in NVIDIA literature and wavefronts in AMD literature, are selected by the hardware scheduler and executed in lockstep on the available cores. If the threads in such a group access the slow off-chip global memory, the entire group must be stalled and another group scheduled instead. The utilization of a given multiprocessor remains high as long as there are enough alternative thread groups to select from. Many parallel general-purpose applications have been efficiently mapped to GPUs. Unfortunately, many stream processing applications exhibit unfavorable data movement patterns and a low computation-to-communication ratio, which may lead to poor performance. In this paper, we describe an automated compilation flow that maps most stream processing applications onto GPUs by taking into consideration two important architectural features of NVIDIA GPUs, namely interleaved execution and the small amount of shared memory available in each streaming multiprocessor. In particular, we show that by using a small number of compute threads, so that the memory footprint is reduced, we can achieve high utilization of the GPU cores. Our scheme goes against the conventional wisdom of GPU programming, which is to use a large number of homogeneous threads. Instead, it uses a mix of compute and memory-access threads, together with a carefully crafted schedule that exploits the parallelism in the streaming application while maximizing the effectiveness of the GPU's unique memory hierarchy. We have implemented our scheme in the compiler of the StreamIt programming language, and our results show a significant speedup over state-of-the-art solutions.
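The compute/memory thread split the abstract describes can be pictured with a short warp-specialized kernel. The sketch below is not the authors' implementation: the kernel name, the filter function, the tile size, and the single-block launch are all illustrative assumptions. One warp is dedicated to streaming data from global memory into a double-buffered shared-memory tile, while the remaining warps run the stream filter on the tile staged in the previous step.

#include <cuda_runtime.h>

#define TILE 256        // elements staged per step (illustrative)

// Hypothetical per-element stream filter, standing in for a StreamIt actor.
__device__ float filter(float x) { return 2.0f * x + 1.0f; }

// Warp-specialized kernel: warp 0 acts as the memory-access threads,
// streaming tiles of the input into a double-buffered shared array,
// while the remaining warps act as compute threads that process the
// tile staged in the previous step. Assumes a single block, blockDim.x
// a multiple of warpSize, and n a multiple of TILE.
__global__ void stream_kernel(const float *in, float *out, int n) {
    __shared__ float buf[2][TILE];
    int warp = threadIdx.x / warpSize;
    int lane = threadIdx.x % warpSize;
    int tiles = n / TILE;

    for (int t = 0; t <= tiles; ++t) {
        if (warp == 0 && t < tiles) {
            // Memory threads: fetch tile t into the spare buffer.
            for (int i = lane; i < TILE; i += warpSize)
                buf[t & 1][i] = in[t * TILE + i];
        } else if (warp != 0 && t > 0) {
            // Compute threads: filter tile t-1 from the other buffer.
            int tid = threadIdx.x - warpSize;
            int nthreads = blockDim.x - warpSize;
            for (int i = tid; i < TILE; i += nthreads)
                out[(t - 1) * TILE + i] = filter(buf[(t - 1) & 1][i]);
        }
        __syncthreads();   // hand the buffers over between the two roles
    }
}

// Example launch (sizes are illustrative):
//   stream_kernel<<<1, 256>>>(d_in, d_out, n);

With this split, the few compute warps rarely touch global memory directly, the shared-memory footprint per block stays small, and the dedicated memory warp keeps data flowing, so a modest thread count can keep the cores busy.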
November 23, 2011 by hgpu