
AutoDDL: Automatic Distributed Deep Learning with Asymptotically Optimal Communication

Jinfan Chen, Shigang Li, Ran Guo, Jinhui Yuan, Torsten Hoefler
Department of Computer Science, ETH Zurich
arXiv:2301.06813 [cs.DC], 17 Jan 2023

@misc{chen2023autoddl,
   doi       = {10.48550/ARXIV.2301.06813},
   url       = {https://arxiv.org/abs/2301.06813},
   author    = {Chen, Jinfan and Li, Shigang and Guo, Ran and Yuan, Jinhui and Hoefler, Torsten},
   keywords  = {Distributed, Parallel, and Cluster Computing (cs.DC), FOS: Computer and information sciences},
   title     = {AutoDDL: Automatic Distributed Deep Learning with Asymptotically Optimal Communication},
   publisher = {arXiv},
   year      = {2023},
   copyright = {arXiv.org perpetual, non-exclusive license}
}

Recent advances in deep learning are built on growing model sizes and the necessary scaling of compute power. Training such large-scale models requires an intricate combination of data, operator, and pipeline parallelism in complex distributed systems. We show how to use OneFlow’s Split, Broadcast, and Partial Sum (SBP) tensor formulations to enable new distributed training methods with asymptotically optimal communication overheads. Using these insights, we develop AutoDDL, a distributed training framework that combines an exhaustive performance model and automated configuration search to find distributions with near-optimal communication overheads. We evaluate AutoDDL on Multi-Node-Single-GPU and Multi-Node-Multi-GPU machines using different models, including VGG and Transformer. Compared to expert-optimized implementations, AutoDDL reduces the end-to-end training time by up to 31.1% and 10% for Transformer and up to 17.7% and 71.5% for VGG on the two different systems, respectively.
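For readers unfamiliar with the SBP abstraction the abstract refers to, the following minimal sketch shows how OneFlow global tensors can be annotated as split, broadcast, or partial-sum across devices, and how changing a tensor's SBP layout triggers the collective communication whose cost a search such as AutoDDL's tries to minimize. The tensor shapes, rank list, and layout choices are illustrative assumptions, not the configurations used in the paper.

# Minimal sketch of OneFlow's SBP (Split, Broadcast, Partial-Sum) annotations.
# Shapes, ranks, and the chosen layouts are illustrative assumptions only.
import oneflow as flow

placement = flow.placement("cuda", ranks=[0, 1, 2, 3])  # 4 GPUs

# Activations replicated (broadcast), weight sharded along its output dim (split(1)).
x = flow.randn(32, 1024, placement=placement, sbp=flow.sbp.broadcast)
w = flow.randn(1024, 4096, placement=placement, sbp=flow.sbp.split(1))

y = flow.matmul(x, w)   # each rank holds a column slice of the result
print(y.sbp)            # (oneflow.sbp.split(dim=1),)

# Sharding both operands along the contraction dimension instead yields a
# partial-sum tensor: every rank holds a partial result that must be reduced.
x2 = flow.randn(32, 1024, placement=placement, sbp=flow.sbp.split(1))
w2 = flow.randn(1024, 4096, placement=placement, sbp=flow.sbp.split(0))
y2 = flow.matmul(x2, w2)  # sbp is partial_sum

# Re-distributing to the layout expected by the next operator is where the
# communication happens (e.g., an all-reduce to materialize the full result).
y2_full = y2.to_global(placement=placement, sbp=flow.sbp.broadcast)

Run under a multi-process launcher (e.g., one process per GPU); on a single process the placement above would not be valid.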
