Exploring GPU-to-GPU Communication: Insights into Supercomputer Interconnects
Sapienza University of Rome
arXiv:2408.14090 [cs.DC], (26 Aug 2024)
@misc{desensi2024exploringgputogpucommunicationinsights,
  title={Exploring GPU-to-GPU Communication: Insights into Supercomputer Interconnects},
  author={Daniele De Sensi and Lorenzo Pichetti and Flavio Vella and Tiziano De Matteis and Zebin Ren and Luigi Fusco and Matteo Turisini and Daniele Cesarini and Kurt Lust and Animesh Trivedi and Duncan Roweth and Filippo Spiga and Salvatore Di Girolamo and Torsten Hoefler},
  year={2024},
  eprint={2408.14090},
  archivePrefix={arXiv},
  primaryClass={cs.DC},
  url={https://arxiv.org/abs/2408.14090}
}
Multi-GPU nodes are increasingly common in the rapidly evolving landscape of exascale supercomputers. On these systems, GPUs on the same node are connected through dedicated networks, with bandwidths up to a few terabits per second. However, gauging performance expectations and maximizing system efficiency is challenging due to the variety of technologies, design options, and software layers involved. This paper comprehensively characterizes three supercomputers – Alps, Leonardo, and LUMI – each with a unique architecture and design. We focus on performance evaluation of intra-node and inter-node interconnects on up to 4096 GPUs, using a mix of intra-node and inter-node benchmarks. By analyzing their limitations and opportunities, we aim to offer practical guidance to researchers, system architects, and software developers dealing with multi-GPU supercomputing. Our results show that there is untapped bandwidth, and that many opportunities for optimization remain, ranging from the network to the software stack.
September 1, 2024 by hgpu