
Principles for Automated and Reproducible Benchmarking

Tuomas Koskela, Ilektra Christidi, Mosè Giordano, Emily Dubrovska, Jamie Quinn, Christopher Maynard, Dave Case, Kaan Olgu, Tom Deakin
Advanced Research Computing, University College London, London, UK
Proceedings of the SC ’23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis (SC-W ’23), 2023

@inproceedings{koskela2023principles,
   title={Principles for automated and reproducible benchmarking},
   author={Koskela, Tuomas and Christidi, Ilektra and Giordano, Mos{\`e} and Dubrovska, Emily and Quinn, Jamie and Maynard, Christopher and Case, Dave and Olgu, Kaan and Deakin, Tom},
   booktitle={Proceedings of the SC '23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis},
   pages={609--618},
   year={2023}
}

The diversity in processor technology used by High Performance Computing (HPC) facilities is growing, and so applications must be written in such a way that they can attain high levels of performance across a range of different CPUs, GPUs, and other accelerators. Measuring application performance across this wide range of platforms becomes crucial, but there are significant challenges in doing this rigorously and in a time-efficient way, whilst ensuring results are scientifically meaningful, reproducible, and actionable. This paper presents a methodology for measuring and analysing the performance portability of a parallel application, and shares a software framework that combines and extends adopted technologies to provide a usable benchmarking tool. We demonstrate the flexibility and effectiveness of the methodology and benchmarking framework by showcasing a variety of benchmarking case studies which utilise a stable of supercomputing resources at a national scale.
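As background for "measuring and analysing performance portability": a widely used way to quantify it is the Pennycook et al. metric, the harmonic mean of an application's per-platform efficiencies, which drops to zero if the application fails on any platform in the set. The sketch below is not taken from the paper; it is a minimal illustration of that metric, and the platform names and efficiency values are hypothetical.

```python
def performance_portability(efficiencies):
    """Pennycook et al. performance-portability metric.

    `efficiencies` maps platform name -> achieved efficiency in (0, 1]
    (e.g. fraction of best-known or architectural peak performance).
    Returns the harmonic mean of the efficiencies, or 0.0 if the
    application does not run (efficiency <= 0) on some platform.
    """
    if not efficiencies or any(e <= 0 for e in efficiencies.values()):
        return 0.0
    return len(efficiencies) / sum(1.0 / e for e in efficiencies.values())

# Hypothetical measurements on three platforms:
platforms = {"x86 CPU": 0.80, "NVIDIA GPU": 0.65, "A64FX": 0.50}
print(performance_portability(platforms))  # ~0.63
```

The harmonic mean is chosen so that one poorly performing platform pulls the score down sharply, making the metric a conservative summary of portability across the whole platform set.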