
Experience Report: Writing A Portable GPU Runtime with OpenMP 5.1

Shilei Tian, Jon Chesterfield, Johannes Doerfert, Barbara Chapman
Department of Computer Science, Stony Brook University, USA
arXiv:2106.03219 [cs.DC], (6 Jun 2021)

@misc{tian2021experience,
   title={Experience Report: Writing A Portable GPU Runtime with OpenMP 5.1},
   author={Shilei Tian and Jon Chesterfield and Johannes Doerfert and Barbara Chapman},
   year={2021},
   eprint={2106.03219},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}


GPU runtimes are historically implemented in CUDA or other vendor-specific languages dedicated to GPU programming. In this work we show that OpenMP 5.1, with minor compiler extensions, is capable of replacing existing solutions without a performance penalty. The result is a performant and portable GPU runtime that can be compiled with LLVM/Clang for Nvidia and AMD GPUs without the need for CUDA or HIP during its development and compilation. While we tried to be OpenMP compliant, we identified the need for compiler extensions so that our OpenMP runtime matches CUDA performance. We hope that future versions of OpenMP adopt our extensions to make device programming in OpenMP portable across compilers as well, not only across execution platforms. The library we ported to OpenMP is the OpenMP device runtime, which provides OpenMP functionality on the GPU. This work opens the door for shipping OpenMP offloading with a Linux distribution’s LLVM package, as the package manager would no longer need a vendor SDK to build the compiler and runtimes. Furthermore, our OpenMP device runtime can support a new GPU target through the use of a few compiler intrinsics rather than requiring a reimplementation of the entire runtime.

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
