Efficient Execution of OpenMP on GPUs
Oak Ridge National Laboratory, Oak Ridge, USA
International Symposium on Code Generation and Optimization (CGO), 2022
@inproceedings{huber2022efficient,
title={Efficient Execution of OpenMP on GPUs},
author={Huber, Joseph and Cornelius, Melanie and Georgakoudis, Giorgis and Tian, Shilei and Diaz, Jose M Monsalve and Dinel, Kuter and Chapman, Barbara and Doerfert, Johannes},
booktitle={2022 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)},
pages={41--52},
year={2022},
organization={IEEE}
}
OpenMP is the preferred choice for CPU parallelism in High-Performance Computing (HPC) applications written in C, C++, or Fortran. As HPC systems became heterogeneous, OpenMP introduced support for accelerator offloading via the target directive. This allowed porting existing (CPU) code onto GPUs, including well-established CPU parallelism paradigms. However, there are architectural differences between CPU and GPU execution which make common patterns, such as forking and joining threads, single-threaded execution, or sharing of local (stack) variables, generally costly on the latter. So far, it has been left to the user to identify and avoid inefficient code patterns, most commonly by writing their OpenMP offloading codes in a kernel-language style that resembles CUDA more than it does traditional OpenMP. In this work we present OpenMP-aware program analyses and optimizations that allow efficient execution of the generic, CPU-centric parallelism model provided by OpenMP on GPUs. Our implementation in LLVM/Clang maps various common OpenMP patterns found in real-world applications efficiently to the GPU. As static analysis is inherently limited, we provide actionable and informative feedback to the user about the performed and missed optimizations, together with ways for the user to annotate the program for better results. Our extensive evaluation using several HPC proxy applications shows significantly improved GPU kernel times and reduced resource requirements, such as GPU registers.
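For illustration, here is a minimal sketch (not taken from the paper, identifiers and values are hypothetical) of the kind of CPU-centric OpenMP offloading pattern the abstract refers to: a generic target region containing single-threaded setup, a team-local stack variable shared with worker threads, and a fork/join parallel loop.

```c
#include <stdio.h>

#define N 1024

int main(void) {
  double a[N], b[N];
  for (int i = 0; i < N; ++i) { a[i] = 1.0; b[i] = 2.0; }

  double sum = 0.0;
  // Generic target region: the body starts on a single thread,
  // exactly as it would on the CPU.
  #pragma omp target map(tofrom: a, sum) map(to: b)
  {
    // Single-threaded setup; a local (stack) variable that must later be
    // shared with the spawned worker threads.
    double scale = 0.5;

    // Fork/join parallelism inside the target region.
    #pragma omp parallel for reduction(+: sum)
    for (int i = 0; i < N; ++i) {
      a[i] += scale * b[i];
      sum  += a[i];
    }
  }
  printf("sum = %f\n", sum);
  return 0;
}
```

Executed naively on a GPU, such a region pays for the sequential prologue and for sharing the stack variable across threads; the paper's analyses and optimizations in LLVM/Clang aim to map exactly this style of code efficiently (for example, when compiling with offloading enabled via flags such as -fopenmp -fopenmp-targets=... in Clang).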
May 1, 2022 by hgpu