
Implementing implicit OpenMP data sharing on GPUs

Gheorghe-Teodor Bercea, Carlo Bertolli, Arpith C. Jacob, Alexandre Eichenberger, Alexey Bataev, Georgios Rokos, Hyojin Sung, Tong Chen, Kevin O’Brien
IBM TJ Watson Research Center, 1101 Kitchawan Rd., Yorktown Heights, NY 10598, USA
arXiv:1711.10413 [cs.PL] (28 Nov 2017)

@article{bercea2017implementing,
   title={Implementing implicit OpenMP data sharing on GPUs},
   author={Bercea, Gheorghe-Teodor and Bertolli, Carlo and Jacob, Arpith C. and Eichenberger, Alexandre and Bataev, Alexey and Rokos, Georgios and Sung, Hyojin and Chen, Tong and O'Brien, Kevin},
   year={2017},
   month={nov},
   eprint={1711.10413},
   archivePrefix={arXiv},
   primaryClass={cs.PL},
   doi={10.1145/3148173.3148189}
}


OpenMP is a shared memory programming model which supports the offloading of target regions to accelerators such as NVIDIA GPUs. The implementation in Clang/LLVM aims to deliver a generic GPU compilation toolchain that supports both the native CUDA C/C++ and the OpenMP device offloading models. There are situations where the semantics of OpenMP and those of CUDA diverge. One such example is the policy for implicitly handling local variables. In CUDA, local variables are implicitly mapped to thread-local memory and thus become private to a CUDA thread. In OpenMP, because regions executed by different numbers of threads may be nested, variables need to be implicitly shared among the threads of a contention group. In this paper we present a re-design of the OpenMP device data sharing infrastructure in the Clang/LLVM toolchain, which is responsible for the implicit sharing of local variables and which lowers implicitly shared variables to the shared memory of the GPU. We measure the amount of shared memory used by our scheme in cases involving scalar variables and statically allocated arrays, evaluating by offloading to NVIDIA K40 and P100 GPUs. For scalar variables the pressure on shared memory is relatively low, under 26% of shared memory utilization on the K40, and does not negatively impact occupancy; the limiting occupancy factor in that case is register pressure. The data sharing scheme offers users a simple memory model for controlling the implicit allocation of device shared memory.
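To make the semantic divergence concrete, below is a minimal OpenMP C sketch (our own illustration, not code from the paper) of the pattern the data sharing infrastructure must handle: a variable declared sequentially inside a target region that must remain visible to all threads of a nested parallel region.

#include <stdio.h>

int main(void) {
  #pragma omp target
  {
    /* Executed initially by a single thread (the team master).
       A CUDA-style lowering would place this local in thread-private
       local memory; OpenMP semantics require that it be shared with
       the threads of the nested parallel region below, which is why
       the paper's scheme lowers such variables to GPU shared memory. */
    int counter = 0;

    #pragma omp parallel
    {
      /* Every thread must update the same "counter" instance. */
      #pragma omp atomic
      counter++;
    }

    /* Prints the number of threads in the contention group. */
    printf("counter = %d\n", counter);
  }
  return 0;
}

Compiled with Clang's NVPTX offloading (e.g. clang -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda), the final value of counter equals the number of threads in the parallel region; this is only the case if all threads observe a single shared instance of the variable rather than thread-private copies.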
