Performance and Portability of Accelerated Lattice Boltzmann Applications with OpenACC

E. Calore, A. Gabbana, J. Kraus, S. F. Schifano, R. Tripiccione
Dip. di Fisica e Scienze della Terra, University of Ferrara, and INFN, Ferrara (Italy)
arXiv:1703.00186 [cs.DC], (1 Mar 2017)


@article{Calore2017,
   title={Performance and Portability of Accelerated Lattice Boltzmann Applications with OpenACC},
   author={Calore, E. and Gabbana, A. and Kraus, J. and Schifano, S. F. and Tripiccione, R.},
   year={2017},
   eprint={1703.00186},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}
An increasingly large number of HPC systems rely on heterogeneous architectures combining traditional multi-core CPUs with power-efficient accelerators. Designing efficient applications for these systems has been troublesome in the past, as accelerators could usually be programmed only in accelerator-specific languages, threatening maintainability, portability and correctness. Several new programming environments try to tackle this problem. Among them, OpenACC offers a high-level approach based on compiler directive clauses that mark regions of existing C, C++ or Fortran codes to run on accelerators. This approach directly addresses code portability, leaving to compilers the task of supporting each different accelerator, but one has to carefully assess the relative costs of portable approaches versus computing efficiency. In this paper we address precisely this issue, using as a test bench a massively parallel Lattice Boltzmann algorithm. We first describe our multi-node implementation and optimization of the algorithm, using OpenACC and MPI. We then benchmark the code on a variety of processors, including traditional CPUs and GPUs, and make accurate performance comparisons with other GPU implementations of the same algorithm using CUDA and OpenCL. We also assess the performance impact associated with portable programming, and the actual portability and performance-portability of OpenACC-based applications across several state-of-the-art architectures.
