A Strategy for Automatically Generating High Performance CUDA Code for a GPU Accelerator from a Specialized Fortran Code Expression

Pei-Hung Lin, Jagan Jayaraj, and Paul R. Woodward
Laboratory for Computational Science & Engineering, University of Minnesota
Symposium on Application Accelerators in High Performance Computing, 2010


@inproceedings{lin2010strategy,
   title={A Strategy for Automatically Generating High Performance CUDA Code for a GPU Accelerator from a Specialized Fortran Code Expression},
   author={Lin, P.H. and Jayaraj, J. and Woodward, P.R.},
   booktitle={Application Accelerators in High Performance Computing, 2010 Symposium, Papers},
   year={2010}
}




Recent microprocessor designs concentrate on adding cores rather than increasing clock speeds in order to achieve enhanced performance. As a result, in the last few years computational accelerators featuring many cores per chip have begun to appear in high performance scientific computing systems. The IBM Cell processor, with its 9 heterogeneous cores, was the first such accelerator to appear in a large scientific computing system, the Roadrunner machine at Los Alamos. Still more highly parallel graphics processing units (GPUs) are now beginning to be adopted as accelerators for scientific applications. For GPUs the potential performance is even greater than for Cell, but, as with Cell, there are multiple hurdles that must be surmounted in order to achieve this performance. We can think of multicore CPUs, like the Intel Nehalem-EX and IBM Power-7, as lying at one end of a spectrum defined by a large ratio of on-chip memory to peak processing power. The Cell processor has a smaller value of this ratio, while GPUs have smaller values still. In our view, as one goes from large to small values of this ratio, the programming effort required to achieve a substantial fraction of the potential peak performance becomes greater. Our team has had good experience in meeting the programming challenges of the Cell processor [1-3], and we have developed a strategy for extending that work to GPUs. Our plan is to produce an automated code translator, as we have done for Cell, in order to reduce the GPU programming burden for scientific applications. In this paper we outline our strategy and give some very preliminary results.
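The on-chip-memory-to-peak-performance spectrum the abstract describes can be sketched with rough arithmetic. The chip figures below are ballpark, publicly quoted numbers chosen by us for illustration (they are not taken from the paper): a Nehalem-EX-class CPU with a 24 MB shared L3, a Cell with 8 SPEs of 256 KB local store each, and a GT200-class GPU with 16 KB of shared memory on each of 30 multiprocessors.

```python
# Approximate on-chip fast memory (KB) vs. single-precision peak (GFLOP/s).
# All values are round, illustrative estimates, not vendor-exact specs.
chips = {
    "Nehalem-EX-class CPU": (24 * 1024, 145),  # 24 MB shared L3 cache
    "Cell (8 SPEs)":        (8 * 256,   205),  # 256 KB local store per SPE
    "GT200-class GPU":      (30 * 16,   933),  # 16 KB shared memory per SM
}

for name, (kb, gflops) in chips.items():
    ratio = kb / gflops  # KB of on-chip memory per peak GFLOP/s
    print(f"{name:22s} {ratio:8.2f} KB per GFLOP/s")
```

Even with these crude numbers, the ratio drops by more than two orders of magnitude from CPU to GPU, which is the paper's motivation for expecting the programming effort needed to approach peak to grow along the same axis.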

