
HipBone: A performance-portable GPU-accelerated C++ version of the NekBone benchmark

Noel Chalmers, Abhishek Mishra, Damon McDougall, Tim Warburton
Data Center GPU and Accelerated Processing, Advanced Micro Devices Inc., 7373 Southwest Pwky., Austin, TX, 78735
arXiv:2202.12477 [cs.DC], 25 Feb 2022

@article{chalmers2022hipbone,
   title={HipBone: A performance-portable GPU-accelerated C++ version of the NekBone benchmark},
   author={Chalmers, Noel and Mishra, Abhishek and McDougall, Damon and Warburton, Tim},
   journal={arXiv preprint arXiv:2202.12477},
   year={2022}
}


We present hipBone, an open-source, performance-portable proxy application for the Nek5000 (and NekRS) CFD applications. HipBone is a fully GPU-accelerated C++ implementation of the original NekBone CPU proxy application with several novel algorithmic and implementation improvements which optimize its performance on modern fine-grained parallel GPU accelerators. Our optimizations include storing the degrees of freedom of the problem in assembled form, which reduces the amount of data moved during the main iteration, and a portable implementation of the main Poisson operator kernel. We demonstrate near-roofline performance of the operator kernel on three different modern GPU accelerators from two different vendors. We present a novel algorithm for splitting the application of the Poisson operator on GPUs which aggressively hides the MPI communication required for both halo exchange and assembly. Our implementation of nearest-neighbor MPI communication then leverages several different routing algorithms and GPU-Direct RDMA capabilities, when available, which improves the scalability of the benchmark. We demonstrate the performance of hipBone on three different clusters housed at Oak Ridge National Laboratory, namely the Summit supercomputer and the Frontier early-access clusters, Spock and Crusher. Our tests demonstrate both portability across different clusters and very good scaling efficiency, especially on large problems.

HGPU group © 2010-2022 hgpu.org

All rights belong to the respective authors
