
Portability and Scalability of OpenMP Offloading on State-of-the-art Accelerators

Yehonatan Fridman, Guy Tamir, Gal Oren
Department of Computer Science, Ben-Gurion University of the Negev, Israel
arXiv:2304.04276 [cs.DC] (9 Apr 2023)

@misc{fridman2023portability,
   title={Portability and Scalability of OpenMP Offloading on State-of-the-art Accelerators},
   author={Yehonatan Fridman and Guy Tamir and Gal Oren},
   year={2023},
   eprint={2304.04276},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}

Over the last decade, most of the increase in computing power has been gained by advances in accelerated many-core architectures, mainly in the form of GPGPUs. While accelerators achieve phenomenal performance in various computing tasks, their utilization requires code adaptations and transformations. Thus, OpenMP, the most common standard for multi-threading in scientific computing applications, introduced offloading capabilities between hosts (CPUs) and accelerators in v4.0, with increasing support in the successive v4.5, v5.0, v5.1, and the latest v5.2 versions. Recently, two state-of-the-art GPUs – the Intel Ponte Vecchio Max 1100 and the NVIDIA A100 – were released to the market, with oneAPI and GNU LLVM-backed compilation for offloading, respectively. In this work, we present early performance results of OpenMP offloading capabilities to these devices while specifically analyzing the portability of advanced directives (using SOLLVE’s OMPVV test suite) and the scalability of the hardware in a representative scientific mini-app (the LULESH benchmark). Our results show that the vast majority of the offloading directives in v4.5 and v5.0 are supported in the latest oneAPI and GNU compilers; however, the support in v5.1 and v5.2 is still lacking. From the performance perspective, we found that PVC is up to 37% better than the A100 on the LULESH benchmark, presenting better performance in computing and data movements.
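To make the kind of directive under study concrete, here is a minimal sketch of an OpenMP 4.5-style offloaded SAXPY loop in C. It is an illustrative example, not code from the paper or from LULESH; the helper name `saxpy_sum` is hypothetical. If the compiler is invoked without offloading support, the pragma is simply ignored and the loop runs on the host, which is part of what makes such directives portable.

```c
/* Illustrative sketch (not from the paper): a SAXPY loop offloaded with an
 * OpenMP 4.5 combined construct. The map clauses copy x to the device and
 * copy y both ways; teams/distribute/parallel for spreads iterations across
 * the accelerator's compute units. */
float saxpy_sum(int n) {          /* hypothetical helper; n <= 1024 */
    float a = 2.0f;
    float x[1024], y[1024];
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    #pragma omp target teams distribute parallel for \
            map(to: x[0:n]) map(tofrom: y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];

    /* checksum on the host after y is mapped back */
    float s = 0.0f;
    for (int i = 0; i < n; i++) s += y[i];
    return s;
}
```

Compiled with, e.g., `icx -fiopenmp -fopenmp-targets=spir64` (oneAPI) or `gcc -fopenmp -foffload=nvptx-none` (GNU), the loop body executes on the GPU; without those flags it falls back to the CPU with identical results.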
