Stencil Computations on AMD and Nvidia Graphics Processors: Performance and Tuning Strategies
Department of Computer Science, Aalto University, Espoo, 02150, Finland
arXiv:2406.08923 [cs.DC], (13 Jun 2024)
@misc{pekkila2024stencil,
title={Stencil Computations on AMD and Nvidia Graphics Processors: Performance and Tuning Strategies},
author={Johannes Pekkilä and Oskar Lappi and Fredrik Robertsén and Maarit J. Korpi-Lagg},
year={2024},
eprint={2406.08923},
archivePrefix={arXiv},
primaryClass={cs.DC}
}
Over the last ten years, graphics processors have become the de facto accelerator for data-parallel tasks in various branches of high-performance computing, including machine learning and computational sciences. However, with the recent introduction of AMD-manufactured graphics processors to the world’s fastest supercomputers, tuning strategies established for previous hardware generations must be re-evaluated. In this study, we evaluate the performance and energy efficiency of stencil computations on modern datacenter graphics processors, and propose a tuning strategy for fusing cache-heavy stencil kernels. The studied cases comprise both synthetic and practical applications, which involve the evaluation of linear and nonlinear stencil functions in one to three dimensions. Our experiments reveal that AMD and Nvidia graphics processors exhibit key differences in both hardware and software, necessitating platform-specific tuning to reach their full computational potential.
June 16, 2024 by hgpu