Experiences with High-Level Programming Directives for Porting Applications to GPUs
Computer Science and Mathematics Division, Oak Ridge National Laboratory
Facing the Multicore – Challenge II, Lecture Notes in Computer Science, Volume 7174/2012, 96-107, 2012
@article{hernandez2012experiences,
title={Experiences with High-Level Programming Directives for Porting Applications to GPUs},
author={Hernandez, O. and Ding, W. and Chapman, B. and Kartsaklis, C. and Sankaran, R. and Graham, R.},
journal={Facing the Multicore-Challenge II},
pages={96--107},
year={2012},
publisher={Springer}
}
HPC systems now exploit GPUs within their compute nodes to accelerate program performance. As a result, high-end application development has become extremely complex at the node level. In addition to restructuring the node code to exploit the cores and specialized devices, the programmer may need to choose a programming model such as OpenMP or CPU threads in conjunction with an accelerator programming model to share and manage the different node resources. This comes at a time when programmer productivity and the ability to produce portable code have been recognized as major concerns. To offset the high development cost of creating CUDA or OpenCL kernels, directives have been proposed for programming accelerator devices, but their implications are not well known. In this paper, we evaluate state-of-the-art accelerator directives to program several application kernels, explore transformations to achieve good performance, and examine the expressivity and performance penalty of using high-level directives versus CUDA. We also compare our results to OpenMP implementations to understand the benefits of running the kernels on the accelerator versus CPU cores.
June 13, 2012 by hgpu