
Exploring the Limits of Generic Code Execution on GPUs via Direct (OpenMP) Offload

Shilei Tian, Barbara Chapman, Johannes Doerfert
International Workshop on OpenMP (IWOMP), 2023

@inproceedings{tian2023exploring,
   title={Exploring the Limits of Generic Code Execution on GPUs via Direct (OpenMP) Offload},
   author={Tian, Shilei and Chapman, Barbara and Doerfert, Johannes},
   booktitle={International Workshop on OpenMP},
   pages={179--192},
   year={2023},
   organization={Springer}
}


GPUs are well-known for their remarkable ability to accelerate computations through massive parallelism. However, offloading computations to GPUs necessitates manual identification of code regions that should be executed on the device, memory that needs to be transferred, and synchronization to be handled. Recent work has leveraged the portable target offloading interface provided by LLVM/OpenMP, taking GPU acceleration to a new level. This approach, known as the direct GPU compilation scheme, involves compiling the entire host application for the GPU and executing it there, thereby eliminating the need for explicit offloading directives. Nonetheless, due to limitations of the current GPU compiler toolchain and execution environment, seamlessly executing CPU code that uses certain features on GPUs remains a significant challenge. In this paper, we examine the limits of CPU code execution on GPUs by applying the direct GPU compilation scheme to LLVM’s test-suite, analyze the encountered errors, and discuss potential solutions for enabling more code to execute on GPUs without modification where feasible. By studying these issues, we shed light on how to improve GPU acceleration and make it more accessible to developers.
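To illustrate the contrast the abstract draws, the conventional OpenMP offload style marks the device region and data transfers explicitly, whereas the direct GPU compilation scheme aims to compile an unmodified host program (the same loop in a plain `main`, with no directives) for the GPU. The snippet below is a minimal sketch of the explicit-offload style only; the array sizes and loop body are illustrative and not taken from the paper.

#include <stdio.h>

/* Explicit OpenMP target offload: the programmer marks the region that
 * runs on the GPU and the data that must be mapped to and from the
 * device. Under the direct GPU compilation scheme discussed in the
 * paper, the same loop in an ordinary host-only program would instead
 * be compiled for the GPU as a whole, without these directives. */
int main(void) {
  enum { N = 1 << 20 };
  static double a[N], b[N];

  for (int i = 0; i < N; ++i) {
    a[i] = 1.0;
    b[i] = 2.0;
  }

  /* Offload the loop; map both arrays to the device and copy a back. */
  #pragma omp target teams distribute parallel for \
          map(tofrom: a[0:N]) map(to: b[0:N])
  for (int i = 0; i < N; ++i)
    a[i] += b[i];

  printf("a[0] = %f\n", a[0]);
  return 0;
}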
