A Massive Data Parallel Computational Framework on Petascale/Exascale Hybrid Computer Systems

Marek Blazewicz, Steven R. Brandt, Peter Diener, David M. Koppelman, Krzysztof Kurowski, Frank Loffler, Erik Schnetter, Jian Tao
Applications Department, Poznan Supercomputing and Networking Center, Poland
ParCo 2011






Heterogeneous systems are becoming more common on High Performance Computing (HPC) systems. Even using tools like CUDA [1] and OpenCL [2], it is a non-trivial task to obtain optimal performance on the GPU. Approaches to simplifying this task include Merge [3] (a library-based framework for heterogeneous multi-core systems), Zippy [4] (a framework for parallel execution of codes on multiple GPUs), BSGP [5] (a new programming language for general purpose computation on the GPU) and CUDA-lite [6] (an enhancement to CUDA that transforms code based on annotations). In addition, efforts are underway to improve compiler tools for automatic parallelization and optimization of affine loop nests for GPUs [7] and for automatic translation of OpenMP parallelized codes to CUDA [8]. In this paper we present an alternative approach: a new computational framework for the development of massively data parallel scientific applications suitable for use on such petascale/exascale hybrid systems, built upon the highly scalable Cactus framework [9,10]. As the first non-trivial demonstration of its usefulness, we successfully developed a new 3D CFD code that achieves improved performance.
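The kind of massively data parallel computation the abstract refers to is, at its core, a grid-based stencil update: every point of a 3D mesh is recomputed independently from its neighbors, which is what maps well onto GPUs. A minimal sketch of such a kernel in plain C (the function and index names here are illustrative assumptions, not the paper's actual API; the framework would generate the equivalent CUDA kernel):

```c
#include <stdlib.h>

/* Flatten a 3D index (i, j, k) into a linear offset on an n^3 grid. */
static size_t idx(size_t i, size_t j, size_t k, size_t n) {
    return (i * n + j) * n + k;
}

/* One 7-point Jacobi-style relaxation sweep over the interior points.
 * Each output point depends only on the input array, so every
 * iteration of the loop nest is independent -- the data-parallel
 * structure a GPU framework exploits. Boundary points are left fixed. */
void stencil_sweep(const double *in, double *out, size_t n) {
    for (size_t i = 1; i < n - 1; ++i)
        for (size_t j = 1; j < n - 1; ++j)
            for (size_t k = 1; k < n - 1; ++k)
                out[idx(i, j, k, n)] =
                    (in[idx(i - 1, j, k, n)] + in[idx(i + 1, j, k, n)] +
                     in[idx(i, j - 1, k, n)] + in[idx(i, j + 1, k, n)] +
                     in[idx(i, j, k - 1, n)] + in[idx(i, j, k + 1, n)]) / 6.0;
}
```

On a GPU, the three loops would be replaced by a thread grid, with one thread per interior point; the per-point independence shown here is exactly what makes that translation mechanical.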
