hiCUDA: a high-level directive-based language for GPU programming
The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada M5S 3G4
In GPGPU-2: Proceedings of 2nd Workshop on General Purpose Processing on Graphics Processing Units (2009), pp. 52-61.
@conference{han2009hi,
title={hiCUDA: a high-level directive-based language for GPU programming},
author={Han, T.D. and Abdelrahman, T.S.},
booktitle={Proceedings of 2nd Workshop on General Purpose Processing on Graphics Processing Units},
pages={52--61},
year={2009},
organization={ACM}
}
The Compute Unified Device Architecture (CUDA) has become a de facto standard for programming NVIDIA GPUs. However, CUDA places on the programmer the burden of packaging GPU code in separate functions, of explicitly managing data transfer between the host memory and various components of the GPU memory, and of manually optimizing the utilization of the GPU memory. Practical experience shows that the programmer needs to make significant code changes, which are often tedious and error-prone, before getting an optimized program. We have designed hiCUDA, a high-level directive-based language for CUDA programming. It allows programmers to perform these tedious tasks in a simpler manner, and to do so directly in the sequential code. Nonetheless, it supports the same programming paradigm already familiar to CUDA programmers. We have prototyped a source-to-source compiler that translates a hiCUDA program to a CUDA program. Experiments using five standard CUDA benchmarks show that the simplicity and flexibility hiCUDA provides come at no expense to performance.
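For context, the programmer burden the abstract refers to can be seen in a plain CUDA version of a simple vector addition: the GPU code must be packaged in a separate kernel function, and the host code must explicitly allocate device memory, copy data in both directions, and pick a launch configuration. This is a minimal illustrative sketch (the problem, array names, and sizes are ours, not taken from the paper); in hiCUDA, the corresponding allocation, transfer, and kernel-launch steps are instead expressed as directives inserted into the sequential code.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// GPU code must live in a separate __global__ function.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *a = (float *)malloc(bytes);
    float *b = (float *)malloc(bytes);
    float *c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Explicit management of GPU memory and host<->device transfers.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, b, bytes, cudaMemcpyHostToDevice);

    // Manually chosen thread-block configuration.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(dA, dB, dC, n);

    cudaMemcpy(c, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", c[0]);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(a); free(b); free(c);
    return 0;
}
```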
November 23, 2010 by hgpu