Exploring GPU-to-GPU Communication: Insights into Supercomputer Interconnects

Daniele De Sensi, Lorenzo Pichetti, Flavio Vella, Tiziano De Matteis, Zebin Ren, Luigi Fusco, Matteo Turisini, Daniele Cesarini, Kurt Lust, Animesh Trivedi, Duncan Roweth, Filippo Spiga, Salvatore Di Girolamo, Torsten Hoefler
Sapienza University of Rome
arXiv:2408.14090 [cs.DC] (26 Aug 2024)

@misc{desensi2024exploringgputogpucommunicationinsights,
  title={Exploring GPU-to-GPU Communication: Insights into Supercomputer Interconnects},
  author={Daniele De Sensi and Lorenzo Pichetti and Flavio Vella and Tiziano De Matteis and Zebin Ren and Luigi Fusco and Matteo Turisini and Daniele Cesarini and Kurt Lust and Animesh Trivedi and Duncan Roweth and Filippo Spiga and Salvatore Di Girolamo and Torsten Hoefler},
  year={2024},
  eprint={2408.14090},
  archivePrefix={arXiv},
  primaryClass={cs.DC},
  url={https://arxiv.org/abs/2408.14090}
}

Multi-GPU nodes are increasingly common in the rapidly evolving landscape of exascale supercomputers. On these systems, GPUs on the same node are connected through dedicated networks with bandwidths of up to a few terabits per second. However, gauging performance expectations and maximizing system efficiency is challenging due to the variety of technologies, design options, and software layers involved. This paper comprehensively characterizes three supercomputers – Alps, Leonardo, and LUMI – each with a unique architecture and design. We focus on performance evaluation of intra-node and inter-node interconnects on up to 4096 GPUs, using a mix of intra-node and inter-node benchmarks. By analyzing their limitations and opportunities, we aim to offer practical guidance to researchers, system architects, and software developers dealing with multi-GPU supercomputing. Our results show that there is untapped bandwidth and that many opportunities for optimization remain, ranging from the network to the software stack.
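
The measurements described above rest on point-to-point and collective bandwidth benchmarks between GPUs. As a rough illustration of the kind of test involved, the sketch below times a GPU-to-GPU ping-pong over MPI. It assumes a CUDA-aware MPI implementation that accepts device pointers (as is available on the systems studied); it is a minimal sketch, not the authors' benchmark suite, and buffer size and iteration count are arbitrary illustrative values.

```c
/* gpu_pingpong.c -- minimal GPU-to-GPU bandwidth sketch.
 * Assumes a CUDA-aware MPI (device pointers passed directly to MPI calls).
 * Build (example): mpicc gpu_pingpong.c -o gpu_pingpong -lcudart
 * Run with exactly 2 ranks, e.g.: mpirun -np 2 ./gpu_pingpong
 */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    const size_t bytes = 1 << 28;  /* 256 MiB message (illustrative) */
    const int iters = 20;

    /* One GPU per rank; pick a device by local rank modulo device count. */
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    cudaSetDevice(rank % ndev);
    void *buf = NULL;
    cudaMalloc(&buf, bytes);

    /* Warm-up plus timed ping-pong loop; iteration 0 is the warm-up. */
    double t0 = 0.0;
    for (int i = 0; i <= iters; i++) {
        if (i == 1) {               /* start timing after warm-up */
            MPI_Barrier(MPI_COMM_WORLD);
            t0 = MPI_Wtime();
        }
        if (rank == 0) {
            MPI_Send(buf, (int)bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, (int)bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, (int)bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, (int)bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    double dt = MPI_Wtime() - t0;

    if (rank == 0) {
        /* Each iteration moves `bytes` in each direction (ping + pong),
         * so uni-directional bandwidth = 2 * iters * bytes / dt. */
        double gbits = 2.0 * iters * (double)bytes * 8.0 / dt / 1e9;
        printf("message: %zu bytes, uni-directional bandwidth: %.2f Gbit/s\n",
               bytes, gbits);
    }

    cudaFree(buf);
    MPI_Finalize();
    return 0;
}
```

Running such a test once between GPUs on the same node and once across nodes makes the gap between intra-node and inter-node bandwidth directly visible, which is the comparison the paper carries out at scale.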

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
