A compiler framework for optimization of affine loop nests for GPGPUs

Muthu M. Baskaran, Uday Bondhugula, Sriram Krishnamoorthy, J. Ramanujam, Atanas Rountev, P. Sadayappan
Department of Computer Science and Engineering, The Ohio State University
In ICS ’08: Proceedings of the 22nd annual international conference on Supercomputing (2008), pp. 225-234


@inproceedings{baskaran2008compiler,
   title={A compiler framework for optimization of affine loop nests for GPGPUs},
   author={Baskaran, M.M. and Bondhugula, U. and Krishnamoorthy, S. and Ramanujam, J. and Rountev, A. and Sadayappan, P.},
   booktitle={Proceedings of the 22nd annual international conference on Supercomputing},
   series={ICS '08},
   year={2008},
   pages={225--234}
}




GPUs are a class of specialized parallel architectures with tremendous computational power. The Compute Unified Device Architecture (CUDA) programming model from NVIDIA facilitates programming of general-purpose applications on their GPUs. However, manual development of high-performance parallel code for GPUs remains very challenging. This paper addresses several issues toward the goal of a compiler framework for automatic parallelization and performance optimization of affine loop nests on GPGPUs: 1) an approach to program transformation for efficient data access from GPU global memory, using a polyhedral compiler model of data dependence abstraction and program transformation; 2) determination of optimal padding factors for conflict-minimal data access from GPU shared memory; and 3) model-driven empirical search to determine optimal parameters for unrolling and tiling. Experimental results on a number of kernels demonstrate the effectiveness of the compiler optimization approaches developed.
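Item 2 concerns padding shared-memory arrays so that the threads of a half-warp hit distinct memory banks. As a minimal sketch (not the paper's actual algorithm), the following assumes the 16-bank shared memory of CUDA GPUs of that era and searches for the smallest padding factor that makes a strided (column-wise) access conflict-free:

```python
from collections import Counter

def bank_conflict_degree(stride, banks=16):
    """Max number of threads in a half-warp that hit the same
    shared-memory bank when thread t accesses word t * stride."""
    hits = Counter((t * stride) % banks for t in range(banks))
    return max(hits.values())

def minimal_padding(width, banks=16):
    """Smallest pad such that column accesses into a row-major tile
    of pitch (width + pad) words are conflict-free, or None."""
    for pad in range(banks):
        if bank_conflict_degree(width + pad, banks) == 1:
            return pad
    return None

# A 16-word-wide tile accessed column-wise is fully serialized
# (16-way conflict); padding each row by one word removes it.
print(bank_conflict_degree(16))   # -> 16
print(minimal_padding(16))        # -> 1
```

The intuition: column accesses stride by the row pitch, and the access is conflict-free exactly when the pitch is coprime to the number of banks, so a pad of 1 fixes any pitch that is a multiple of 16.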

* * *


HGPU group © 2010-2021 hgpu.org

All rights belong to the respective authors
