A compiler framework for optimization of affine loop nests for GPGPUs
Department of Computer Science and Engineering, The Ohio State University
In ICS ’08: Proceedings of the 22nd annual international conference on Supercomputing (2008), pp. 225-234
@inproceedings{baskaran2008compiler,
  title={A compiler framework for optimization of affine loop nests for {GPGPU}s},
  author={Baskaran, M. M. and Bondhugula, U. and Krishnamoorthy, S. and Ramanujam, J. and Rountev, A. and Sadayappan, P.},
  booktitle={Proceedings of the 22nd Annual International Conference on Supercomputing},
  pages={225--234},
  year={2008},
  organization={ACM}
}
GPUs are a class of specialized parallel architectures with tremendous computational power. The new Compute Unified Device Architecture (CUDA) programming model from NVIDIA facilitates programming of general-purpose applications on their GPUs. However, manual development of high-performance parallel code for GPUs remains very challenging. This paper addresses a number of issues toward the goal of developing a compiler framework for automatic parallelization and performance optimization of affine loop nests on GPGPUs: 1) an approach to program transformation for efficient data access from GPU global memory, using a polyhedral compiler model for data dependence abstraction and program transformation; 2) determination of optimal padding factors for conflict-minimal data access from GPU shared memory; and 3) model-driven empirical search to determine optimal parameters for unrolling and tiling. Experimental results on a number of kernels demonstrate the effectiveness of the compiler optimization approaches developed.
December 12, 2010 by hgpu