
A Strategy for Automatically Generating High Performance CUDA Code for a GPU Accelerator from a Specialized Fortran Code Expression

Pei-Hung Lin, Jagan Jayaraj, and Paul R. Woodward
Laboratory for Computational Science & Engineering, University of Minnesota
Symposium on Application Accelerators in High Performance Computing, 2010

@inproceedings{lin2010strategy,
   title={A Strategy for Automatically Generating High Performance CUDA Code for a GPU Accelerator from a Specialized Fortran Code Expression},
   author={Lin, P.-H. and Jayaraj, J. and Woodward, P. R.},
   booktitle={Symposium on Application Accelerators in High Performance Computing},
   year={2010}
}


Recent microprocessor designs concentrate on adding cores rather than increasing clock speeds in order to achieve enhanced performance. As a result, in the last few years computational accelerators featuring many cores per chip have begun to appear in high performance scientific computing systems. The IBM Cell processor, with its 9 heterogeneous cores, was the first such accelerator to appear in a large scientific computing system, the Roadrunner machine at Los Alamos. Still more highly parallel graphics processing units (GPUs) are now beginning to be adopted as accelerators for scientific applications. For GPUs the potential performance is even greater than for Cell, but, as for Cell, there are multiple hurdles that must be surmounted in order to achieve this performance. We can think of multicore CPUs, like the Intel Nehalem-EX and IBM POWER7, as lying at one end of a spectrum defined by the ratio of on-chip memory to peak processing power: these CPUs have a large value of this ratio, the Cell processor has a smaller value, and GPUs have much smaller values still. In our view, as one goes from large to small values of this ratio, the programming effort required to achieve a substantial fraction of the potential peak performance becomes greater. Our team has had good experience in meeting the programming challenges of the Cell processor [1-3], and we have developed a strategy for extending that work to GPUs. Our plan is to produce an automated code translator, as we have done for Cell, in order to reduce the GPU programming burden for scientific applications. In this paper we outline our strategy and give some very preliminary results.
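To make the kind of source-to-source translation the paper targets more concrete, the following is a minimal, hypothetical sketch (not the authors' translator or its output): a simple Fortran-style one-dimensional stencil loop rewritten by hand as a CUDA kernel with one thread per interior array element. The kernel name, array names, and launch parameters are illustrative assumptions only.

    // Hypothetical illustration of the Fortran-to-CUDA mapping discussed above.
    // Original Fortran-style loop (0.5 * average of neighbors):
    //     do i = 2, n-1
    //        anew(i) = 0.5 * (aold(i-1) + aold(i+1))
    //     end do
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void stencil_update(const float *aold, float *anew, int n)
    {
        // One thread per interior element; indices shifted to 0-based C.
        int i = blockIdx.x * blockDim.x + threadIdx.x + 1;
        if (i < n - 1)
            anew[i] = 0.5f * (aold[i - 1] + aold[i + 1]);
    }

    int main()
    {
        const int n = 1 << 20;
        float *aold, *anew;
        cudaMalloc(&aold, n * sizeof(float));
        cudaMalloc(&anew, n * sizeof(float));
        cudaMemset(aold, 0, n * sizeof(float));
        cudaMemset(anew, 0, n * sizeof(float));

        int threads = 256;
        int blocks = (n - 2 + threads - 1) / threads;
        stencil_update<<<blocks, threads>>>(aold, anew, n);
        cudaDeviceSynchronize();

        cudaFree(aold);
        cudaFree(anew);
        printf("kernel finished\n");
        return 0;
    }

An automated translator of the sort the authors describe would generate such kernels and the associated data-movement code from the specialized Fortran expression, rather than requiring the programmer to write them by hand.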
