CuPBoP-AMD: Extending CUDA to AMD Platforms
Georgia Institute of Technology, Atlanta, Georgia, USA
Proceedings of the SC ’23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (SC-W ’23), 2023
@inproceedings{chen2023cupbop,
title={CuPBoP-AMD: Extending CUDA to AMD Platforms},
author={Chen, Jun and Zhou, Xule and Kim, Hyesoon},
booktitle={Proceedings of the SC'23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis},
pages={1093--1104},
year={2023}
}
The proliferation of artificial intelligence applications has underscored the need for increased portability among graphics processing units (GPUs) from different vendors. With CUDA as one of the most popular GPU programming languages, CuPBoP (CUDA for Parallelized and Broad-range Processors) aims to provide NVIDIA's proprietary CUDA language support to a variety of GPU and CPU platforms by translating CUDA programs at the LLVM/NVVM IR level. Our work extends CuPBoP to AMD GPUs as CuPBoP-AMD. CuPBoP-AMD is a CUDA translator that translates CUDA programs at the NVVM IR level to HIP-compatible IR that can run on AMD GPUs. Currently, CuPBoP-AMD translates a broader range of applications in the Rodinia benchmark suite while maintaining approximately equal performance to the existing state-of-the-art AMD-developed translator, HIPIFY, without requiring programmer intervention.
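To make the translation target concrete, here is a minimal, hypothetical sketch of the kind of CUDA-to-HIP mapping involved. A source-level tool like HIPIFY rewrites the calls below textually (the HIP equivalents are shown in comments); CuPBoP-AMD instead performs the equivalent mapping at the NVVM IR level, so the CUDA source itself is never edited. The example is illustrative only and is not drawn from the paper.

```cuda
// Minimal CUDA vector-add; HIP counterparts of each runtime call are in comments.
#include <cuda_runtime.h>  // HIP: #include <hip/hip_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // same builtins exist in HIP
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    cudaMalloc(&a, n * sizeof(float));  // HIP: hipMalloc
    cudaMalloc(&b, n * sizeof(float));
    cudaMalloc(&c, n * sizeof(float));
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    // HIP: hipLaunchKernelGGL(vecAdd, dim3((n+255)/256), dim3(256), 0, 0, a, b, c, n);
    cudaDeviceSynchronize();            // HIP: hipDeviceSynchronize
    cudaFree(a); cudaFree(b); cudaFree(c);  // HIP: hipFree
    return 0;
}
```

Because the device code (kernel body and thread-index builtins) is largely identical across the two APIs, most of the translation burden falls on host-side runtime calls and kernel-launch syntax, which is what makes an IR-level approach attractive.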
December 3, 2023 by hgpu