
Lost in Translation: Challenges in Automating CUDA-to-OpenCL Translation

Paul Sathre, Mark Gardner, Wu-chun Feng
Dept. of Computer Science, Virginia Tech, USA
5th International Workshop on Parallel Programming Models and Systems Software for High-End Computing (P2S2), 2012

@InProceedings{sathre-p2s2-2012-cu2cl,
  author    = {Sathre, Paul and Gardner, Mark and Feng, Wu-chun},
  title     = {Lost in Translation: Challenges in Automating CUDA-to-OpenCL Translation},
  booktitle = {5th International Workshop on Parallel Programming Models and Systems Software for High-End Computing (P2S2)},
  address   = {Pittsburgh, PA},
  month     = {September},
  year      = {2012}
}


The use of accelerators in high-performance computing is increasing. The most commonly used accelerator is the graphics processing unit (GPU) because of its low cost and massively parallel performance. The two most common programming environments for GPU accelerators are CUDA and OpenCL. While CUDA runs natively only on NVIDIA GPUs, OpenCL is an open standard that can run on a variety of hardware platforms, including NVIDIA GPUs, AMD GPUs, and Intel or AMD CPUs. Given the abundance of GPU applications written in CUDA, we seek to leverage this investment in CUDA and enable CUDA programs to "run anywhere" via a CUDA-to-OpenCL source-to-source translator. The resultant OpenCL versions permit GPU-accelerated codes to run on a wider variety of processors than would otherwise be possible. However, robust source-to-source translation from CUDA to OpenCL faces a myriad of challenges. As such, this paper identifies those challenges and presents a classification of CUDA language idioms that present practical impediments to automatic translation.
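To illustrate the kind of mapping such a translator must perform, consider a minimal sketch (not drawn from the paper; the vector-addition kernel and its identifiers are hypothetical) showing a CUDA kernel and a hand-written OpenCL equivalent:

    // CUDA: kernel marked __global__, thread index computed from
    // built-in variables, launched with <<<...>>> syntax.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }
    // Launch: vecAdd<<<numBlocks, blockSize>>>(a, b, c, n);

    // OpenCL equivalent: __kernel/__global qualifiers, get_global_id(),
    // and a host-side clEnqueueNDRangeKernel() call instead of <<<...>>>.
    __kernel void vecAdd(__global const float *a, __global const float *b,
                         __global float *c, int n) {
        int i = get_global_id(0);
        if (i < n) c[i] = a[i] + b[i];
    }

The device code maps fairly directly, but the host-side APIs differ substantially (context and command-queue setup, memory transfers, and explicit kernel-argument binding in OpenCL), which is where much of the difficulty of automatic translation lies.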
