Automatic generation of warp-level primitives and atomic instructions for fast and portable parallel reduction on GPUs
CS and Coordinated Science Lab, UIUC
International Symposium on Code Generation and Optimization (CGO), 2019
@inproceedings{de2019automatic,
title={Automatic generation of warp-level primitives and atomic instructions for fast and portable parallel reduction on GPUs},
author={De Gonzalo, Simon Garcia and Huang, Sitao and G{\'o}mez-Luna, Juan and Hammond, Simon and Mutlu, Onur and Hwu, Wen-mei},
booktitle={Proceedings of the 2019 IEEE/ACM International Symposium on Code Generation and Optimization},
pages={73--84},
year={2019},
organization={IEEE Press}
}
Since the advent of GPU computing, GPU hardware has evolved at a fast pace. Because application performance heavily depends on the latest hardware improvements, performance portability is extremely challenging for GPU application library developers. Portability becomes even more difficult when new low-level instructions are added to the ISA (e.g., warp shuffle instructions) or the microarchitectural support for existing instructions is improved (e.g., atomic instructions). Library developers, besides re-tuning the code for new hardware features, deal with the performance portability issue by hand-writing multiple algorithm versions that leverage different instruction sets and microarchitectures. High-level programming frameworks and Domain Specific Languages (DSLs) do not typically support low-level instructions (e.g., warp shuffle and atomic instructions), so it is painful or even impossible for these programming systems to take advantage of the latest architectural improvements.

In this work, we design a new set of high-level APIs and qualifiers, as well as specialized Abstract Syntax Tree (AST) transformations for high-level programming languages and DSLs. Our transformations enable warp shuffle instructions and atomic instructions (on global and shared memories) to be easily generated. We show a practical implementation of these transformations by building on Tangram, a high-level kernel synthesis framework. Using our new language and compiler extensions, we implement parallel reduction, a fundamental building block used in a wide range of algorithms. Parallel reduction is representative of the performance portability challenge, as its performance heavily depends on the latest hardware improvements. We compare our synthesized parallel reduction to another high-level programming framework and a hand-written high-performance library across three generations of GPU architectures, and show up to 7.8x speedup (2x on average) over hand-written code.
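To make the target of the code generation concrete, the following is a minimal hand-written CUDA sketch (not the paper's generated code, and independent of Tangram) of the kind of reduction kernel the abstract describes: warp shuffle instructions collapse each warp's partial sums without shared memory, and a single global-memory atomic per warp combines them. All names here are illustrative.

```cuda
#include <cuda_runtime.h>

// Reduce the values held by the 32 lanes of a warp into lane 0,
// using only register-to-register warp shuffle instructions.
__inline__ __device__ float warpReduceSum(float val) {
    // Each step halves the number of contributing lanes:
    // __shfl_down_sync reads `val` from the lane `offset` positions down.
    for (int offset = warpSize / 2; offset > 0; offset /= 2)
        val += __shfl_down_sync(0xffffffff, val, offset);
    return val;
}

// Grid-stride reduction: every thread accumulates a private partial sum,
// each warp folds its 32 partials with shuffles, and lane 0 of each warp
// commits the result with one global-memory atomicAdd.
__global__ void reduceSum(const float *in, float *out, int n) {
    float sum = 0.0f;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += blockDim.x * gridDim.x)
        sum += in[i];

    sum = warpReduceSum(sum);

    if ((threadIdx.x & (warpSize - 1)) == 0)
        atomicAdd(out, sum);  // one atomic per warp, not per thread
}
```

The trade-off sketched here (shuffle-based warp reduction plus atomics, versus older shared-memory tree reductions) is exactly the kind of architecture-dependent choice the paper's AST transformations aim to automate across GPU generations.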
May 26, 2019 by hgpu