A mapping path for multi-GPGPU accelerated computers from a portable high level programming abstraction

Allen Leung, Nicolas Vasilache, Benoit Meister, Muthu Baskaran, David Wohlford, Cedric Bastoul, Richard Lethin
Reservoir Labs, New York, NY
Proceedings of the 3rd Workshop on General-Purpose Computation on Graphics Processing Units, GPGPU ’10, 2010

@inproceedings{leung2010mapping,
   title={A mapping path for multi-GPGPU accelerated computers from a portable high level programming abstraction},
   author={Leung, A. and Vasilache, N. and Meister, B. and Baskaran, M. and Wohlford, D. and Bastoul, C. and Lethin, R.},
   booktitle={Proceedings of the 3rd Workshop on General-Purpose Computation on Graphics Processing Units},
   pages={51--61},
   year={2010},
   organization={ACM}
}

Programmers for GPGPU face a rapidly changing substrate of programming abstractions, execution models, and hardware implementations. Numerous demonstrations, each for a particular conjunction of application kernel, programming language, and GPU hardware instance, have established that significant improvements in price/performance and energy/performance over general-purpose processors are achievable. But each of these demonstrations is the result of significant dedicated programmer labor, which is likely to be duplicated for each new GPU hardware architecture in order to achieve performance portability. This paper discusses the implementation, in the R-Stream compiler, of a source-to-source mapping path from a high-level, textbook-style algorithm expressed in ANSI C to multi-GPGPU accelerated computers. The compiler performs hierarchical decomposition and parallelization of the algorithm between and across the host, multiple GPGPUs, and within each GPU. The semantic transformations are expressed within the polyhedral model, including optimization of integrated parallelization, locality, and contiguity tradeoffs. Hierarchical tiling is performed, and communication and synchronization operations at multiple levels are generated automatically. The resulting mapping is currently emitted in the CUDA programming language. The GPU backend extends the range of hardware and accelerator targets for R-Stream and indicates the potential for performance portability of single sources across multiple hardware targets.
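To illustrate the kind of input the abstract describes (this example is not taken from the paper), a "textbook-style" algorithm in ANSI C is a plain affine loop nest, such as a dense matrix multiply. Loop nests in this style are exactly what a polyhedral mapper like R-Stream can analyze, tile hierarchically, and re-emit as parallel CUDA kernels; the function name and the fixed size `N` here are illustrative choices only.

```c
#include <assert.h>

#define N 4  /* illustrative problem size */

/* Textbook-style dense matrix multiply: a static affine loop nest
   with no pointer aliasing or data-dependent control flow, which is
   the form of input amenable to polyhedral analysis and mapping. */
void matmul(const double A[N][N], const double B[N][N], double C[N][N]) {
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            C[i][j] = 0.0;
            for (int k = 0; k < N; k++) {
                C[i][j] += A[i][k] * B[k][j];
            }
        }
    }
}
```

A mapper in the style described would decompose this iteration space into tiles assigned across GPUs and thread blocks, with the data movement between host and device memories generated automatically rather than written by hand.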
