
Scalability Evaluation of HPC Multi-GPU Training for ECG-based LLMs

Dimitar Mileski, Nikola Petrovski, Marjan Gusev
Ss. Cyril and Methodius University in Skopje, Faculty of Computer Science and Engineering, 1000 Skopje, North Macedonia
arXiv:2503.21033 [cs.DC] (26 Mar 2025)

Training large language models requires extensive computation, typically supplied by high-performance computing (HPC) resources. This study compares multi-node and multi-GPU environments for training large language models on electrocardiogram (ECG) data. It provides a detailed mapping of current frameworks for distributed deep learning in multi-node and multi-GPU settings, including Horovod from Uber, DeepSpeed from Microsoft, and the built-in distributed capabilities of PyTorch and TensorFlow. We compare various multi-GPU setups across different dataset configurations, using multiple HPC nodes independently and focusing on scalability, speedup, efficiency, and overhead. The analysis leverages HPC infrastructure with SLURM, Apptainer (Singularity) containers, CUDA, PyTorch, and shell scripts to automate the training workflows. Scaling the number of GPUs yields a sub-linear speedup of 1.6x on two GPUs and 1.9x on four.
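For context, the standard strong-scaling definitions give speedup S(N) = T(1) / T(N) for N GPUs and parallel efficiency E(N) = S(N) / N (the paper's exact timing methodology is not reproduced here). Applied to the reported figures:

    E(2) = 1.6 / 2 = 0.80   (80% efficiency on two GPUs)
    E(4) = 1.9 / 4 = 0.475  (about 48% on four GPUs)

The falling efficiency is the sub-linear scaling the abstract describes, commonly attributed to communication and synchronization overhead between GPUs.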
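Since the abstract names PyTorch's built-in distributed capabilities, the sketch below shows a minimal DistributedDataParallel (DDP) training loop of the kind such experiments typically build on. It is illustrative only: the linear model, the synthetic tensors, and the file name train_ddp.py are placeholders, not the authors' ECG model or code.

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP
    from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

    def main():
        # torchrun sets RANK, WORLD_SIZE, MASTER_ADDR/PORT, and LOCAL_RANK
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # Placeholder model and synthetic data standing in for the ECG LLM
        model = DDP(torch.nn.Linear(512, 512).cuda(local_rank),
                    device_ids=[local_rank])
        data = TensorDataset(torch.randn(1024, 512), torch.randn(1024, 512))
        sampler = DistributedSampler(data)            # shards data across ranks
        loader = DataLoader(data, batch_size=32, sampler=sampler)

        opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
        loss_fn = torch.nn.MSELoss()
        for epoch in range(2):
            sampler.set_epoch(epoch)                  # reshuffle shards per epoch
            for x, y in loader:
                x, y = x.cuda(local_rank), y.cuda(local_rank)
                opt.zero_grad()
                loss_fn(model(x), y).backward()       # DDP all-reduces gradients
                opt.step()
        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Launched on a single node with, e.g., torchrun --nproc_per_node=2 train_ddp.py; the same script runs unchanged on four GPUs with --nproc_per_node=4.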
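The abstract also lists SLURM and Apptainer containers as the automation layer; a hedged sketch of what a corresponding job script could look like follows. The image name pytorch.sif, like train_ddp.py above, is hypothetical and not taken from the paper.

    #!/bin/bash
    #SBATCH --job-name=ecg-llm-ddp
    #SBATCH --nodes=1
    #SBATCH --gres=gpu:2
    #SBATCH --time=04:00:00

    # --nv exposes the host NVIDIA driver and GPUs inside the container;
    # pytorch.sif is a placeholder Apptainer image with PyTorch and CUDA
    apptainer exec --nv pytorch.sif \
        torchrun --nproc_per_node=2 train_ddp.py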
