A Container-Based Workflow for Distributed Training of Deep Learning Algorithms in HPC Clusters

Jose González-Abad, Álvaro López García, Valentin Y. Kozlov
Instituto de Física de Cantabria (IFCA), CSIC-Universidad de Cantabria, Santander, Spain
arXiv:2208.02498 [cs.DC], (4 Aug 2022)

@misc{https://doi.org/10.48550/arxiv.2208.02498,
  doi       = {10.48550/ARXIV.2208.02498},
  url       = {https://arxiv.org/abs/2208.02498},
  author    = {González-Abad, Jose and García, Álvaro López and Kozlov, Valentin Y.},
  keywords  = {Distributed, Parallel, and Cluster Computing (cs.DC), FOS: Computer and information sciences},
  title     = {A Container-Based Workflow for Distributed Training of Deep Learning Algorithms in HPC Clusters},
  publisher = {arXiv},
  year      = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}

Deep learning has been proposed as a solution for numerous problems across different branches of science. Given the resource-intensive nature of these models, they often need to be executed on specialized hardware, such as graphics processing units (GPUs), in a distributed manner. In academia, researchers typically access such resources through High Performance Computing (HPC) clusters. These infrastructures make training such models difficult due to their multi-user nature and limited user permissions. In addition, different HPC clusters may have different peculiarities that can complicate the research cycle (e.g., library dependencies). In this paper we develop a workflow and methodology for the distributed training of deep learning models in HPC clusters which provides researchers with a series of novel advantages. It relies on udocker as the containerization tool and on Horovod as the library for distributing the models across multiple GPUs. udocker does not require any special permissions, allowing researchers to run the entire workflow without relying on an administrator. Horovod ensures the efficient distribution of training independently of the deep learning framework used. Additionally, thanks to containerization and specific features of the workflow, it provides researchers with a cluster-agnostic way of running their models. The experiments carried out show that the workflow offers good scalability in the distributed training of the models and that it adapts easily to different clusters.
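Horovod's data-parallel approach has each worker compute gradients on its own shard of the data and then average them across all workers with an allreduce before the update is applied. The sketch below illustrates that averaging step conceptually in plain Python; it is not Horovod's API (the function name `allreduce_average` is hypothetical), and a real ring allreduce would exchange partial sums between processes rather than run in one.

```python
# Conceptual sketch of the gradient averaging that Horovod's allreduce
# performs during data-parallel training. NOT Horovod's actual API:
# each worker computes gradients on its own data shard, and every
# worker ends up applying the same averaged gradient.

def allreduce_average(per_worker_grads):
    """Element-wise average of one gradient vector per worker
    (the result a ring allreduce computes, done naively here)."""
    n = len(per_worker_grads)
    dim = len(per_worker_grads[0])
    return [sum(g[i] for g in per_worker_grads) / n for i in range(dim)]

# Four simulated workers, each with a gradient for two parameters.
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
print(allreduce_average(grads))  # [4.0, 5.0]
```

Because every worker applies the same averaged gradient, the model replicas stay synchronized, which is why this scheme scales independently of the underlying deep learning framework.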

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors