From CUDA to OpenCL: Towards a Performance-portable Solution for Multi-platform GPU Programming
University of Tennessee, Knoxville
University of Tennessee, Tech. report UT-CS-10-656, 2011
@techreport{du2011cuda,
title={From CUDA to OpenCL: Towards a Performance-portable Solution for Multi-platform GPU Programming},
author={Du, P. and Weber, R. and Luszczek, P. and Tomov, S. and Peterson, G. and Dongarra, J.},
institution={Technical Report CS-10-656, Electrical Engineering and Computer Science Department, University of Tennessee, 2010. LAPACK Working Note 228},
year={2011}
}
In this work, we evaluate OpenCL as a programming tool for developing performance-portable applications for GPGPU. While the Khronos group developed OpenCL with programming portability in mind, performance is not necessarily portable. OpenCL requires performance-impacting initializations that do not exist in other languages such as CUDA. Understanding these implications allows us to provide a single library with decent performance on a variety of platforms. We choose the triangular solver (TRSM) and matrix multiplication (GEMM) as representative Level 3 BLAS routines to implement in OpenCL. We profile TRSM to obtain the time distribution across the OpenCL runtime system. We then provide tuned GEMM kernels for both the NVIDIA Tesla C2050 and the ATI Radeon 5870, the latest GPUs offered by the two companies. We explore the benefits of using the texture cache, the performance ramifications of copying data into images, discrepancies between the OpenCL and CUDA compilers’ optimizations, and other issues that affect performance. Experimental results show that nearly 50% of peak performance can be obtained in GEMM on both GPUs in OpenCL. We also show that the performance of these kernels is not highly portable. Finally, we propose the use of auto-tuning to better explore these kernels’ parameter space using a search harness.
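The required initializations mentioned in the abstract can be illustrated with a short host-side sketch. The C snippet below is not taken from the report; the kernel name gemm_kernel and its one-line body are placeholders. It shows the platform/device discovery, context and queue creation, and run-time kernel compilation that an OpenCL application must perform before any kernel launch, steps that CUDA's runtime API handles implicitly or at build time.

/* Minimal sketch of the host-side setup OpenCL requires before a kernel
 * can run. The kernel name "gemm_kernel" and its trivial body are
 * placeholders, not the kernels tuned in the report. Error handling is
 * abbreviated for brevity. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void gemm_kernel(__global float *c) {"
    "  c[get_global_id(0)] = 0.0f;"      /* placeholder body */
    "}";

int main(void) {
    cl_int err;

    /* 1. Discover a platform and a GPU device. */
    cl_platform_id platform;
    err = clGetPlatformIDs(1, &platform, NULL);
    cl_device_id device;
    err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    /* 2. Create a context and a command queue on that device. */
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    /* 3. Compile the kernel source at run time, a cost absent in CUDA,
     *    where kernels are compiled offline into the binary. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    err = clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "gemm_kernel", &err);

    printf("setup %s\n", err == CL_SUCCESS ? "ok" : "failed");

    /* 4. Release objects in reverse order of creation. */
    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}

clCreateCommandQueue is the OpenCL 1.x entry point current at the time of the report; OpenCL 2.0 later superseded it with clCreateCommandQueueWithProperties.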