Asynchronous-Many-Task Systems: Challenges and Opportunities – Scaling an AMR Astrophysics Code on Exascale machines using Kokkos and HPX

Gregor Daiß, Patrick Diehl, Jiakun Yan, John K. Holmen, Rahulkumar Gayatri, Christoph Junghans, Alexander Straub, Jeff R. Hammond, Dominic Marcello, Miwako Tsuji, Dirk Pflüger, Hartmut Kaiser
University of Stuttgart, 70569 Stuttgart, Germany
arXiv:2412.15518 [cs.DC] (20 Dec 2024)

@misc{daiß2024asynchronousmanytasksystemschallengesopportunities,
   title={Asynchronous-Many-Task Systems: Challenges and Opportunities – Scaling an AMR Astrophysics Code on Exascale machines using Kokkos and HPX},
   author={Gregor Daiß and Patrick Diehl and Jiakun Yan and John K. Holmen and Rahulkumar Gayatri and Christoph Junghans and Alexander Straub and Jeff R. Hammond and Dominic Marcello and Miwako Tsuji and Dirk Pflüger and Hartmut Kaiser},
   year={2024},
   eprint={2412.15518},
   archivePrefix={arXiv},
   primaryClass={cs.DC},
   url={https://arxiv.org/abs/2412.15518}
}

Dynamic and adaptive mesh refinement is pivotal in high-resolution, multi-physics, multi-model simulations, which require precise physics resolution in localized areas across expansive domains. The extreme heterogeneity of today's supercomputers presents a significant challenge for dynamically adaptive codes, highlighting the importance of achieving performance portability at scale. Our research focuses on astrophysical simulations, particularly stellar mergers, to elucidate early universe dynamics. We present Octo-Tiger, which leverages Kokkos, HPX, and SIMD for portable performance at scale in complex, massively parallel adaptive multi-physics simulations. Octo-Tiger supports diverse processors, accelerators, and network backends. Experiments demonstrate excellent scalability across several heterogeneous supercomputers, including Perlmutter, Frontier, and Fugaku, encompassing the major GPU architectures and x86, ARM, and RISC-V CPUs. We achieve a parallel efficiency of 47.59% on a full-system run on Perlmutter (110,080 cores and 6,880 hybrid A100 GPUs, 26% of HPCG peak performance) and 51.37% on Frontier (using 32,768 cores and 2,048 MI250X GPUs).
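
For readers unfamiliar with the asynchronous many-task (AMT) model that HPX provides, the sketch below illustrates the basic futures-and-continuations style in C++: per-subgrid work is launched as independent tasks and a continuation aggregates the results without a global barrier. This is an illustrative example only, not code from Octo-Tiger; process_subgrid is a hypothetical stand-in for a real compute kernel (which Octo-Tiger implements with Kokkos).

// Minimal AMT sketch using HPX futures (illustrative, not Octo-Tiger code).
#include <hpx/hpx_main.hpp>
#include <hpx/future.hpp>
#include <functional>
#include <iostream>
#include <numeric>
#include <vector>

// Hypothetical placeholder for a per-subgrid compute kernel.
double process_subgrid(std::vector<double> const& cells)
{
    return std::accumulate(cells.begin(), cells.end(), 0.0);
}

int main()
{
    std::vector<std::vector<double>> subgrids(8, std::vector<double>(1024, 1.0));

    // Launch one task per subgrid; each call returns immediately with a future.
    std::vector<hpx::future<double>> results;
    for (auto const& sg : subgrids)
        results.push_back(hpx::async(process_subgrid, std::cref(sg)));

    // A continuation runs once all subgrid tasks have finished;
    // dependencies are expressed explicitly instead of bulk-synchronizing.
    hpx::future<double> total = hpx::when_all(results).then([](auto all) {
        double sum = 0.0;
        for (auto& f : all.get())
            sum += f.get();
        return sum;
    });

    std::cout << "total = " << total.get() << "\n";
    return 0;
}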