Training DNN Models over Heterogeneous Clusters with Optimal Performance

Chengyi Nie, Jessica Maghakian, Zhenhua Liu
Stony Brook University, Stony Brook, NY, USA
arXiv:2402.05302 [cs.DC]

@misc{nie2024training,
    title={Training DNN Models over Heterogeneous Clusters with Optimal Performance},
    author={Chengyi Nie and Jessica Maghakian and Zhenhua Liu},
    year={2024},
    eprint={2402.05302},
    archivePrefix={arXiv},
    primaryClass={cs.DC}
}

Adjusting batch sizes and adaptively tuning other hyperparameters can significantly speed up deep neural network (DNN) training. Despite the ubiquity of heterogeneous clusters, existing adaptive DNN training techniques consider only homogeneous environments. Optimizing distributed DNN training over heterogeneous clusters is technically challenging, and directly adapting existing techniques results in low utilization and poor performance. To solve this problem, we introduce Cannikin, a novel data-parallel distributed training system. Cannikin achieves efficient and near-optimal performance by accurately modeling the optimal system performance and predicting adaptive batch size training metrics for DNNs in heterogeneous clusters. We implemented Cannikin in PyTorch and conducted experiments on 16 GPUs in the Chameleon cloud. Empirical results show that Cannikin reduces DNN training time in heterogeneous clusters by up to 52% compared to the state-of-the-art adaptive training system and up to 85% compared to native PyTorch DistributedDataParallel.
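The abstract describes heterogeneity-aware, adaptive batch sizing for data-parallel training. As a rough illustration of that idea (not the authors' actual algorithm), the sketch below splits a global batch across workers in proportion to their measured per-GPU throughput, so faster devices receive larger local batches; the function name `split_global_batch` and the example throughput numbers are assumptions for illustration only.

```python
# Minimal sketch of heterogeneity-aware batch-size assignment.
# Not Cannikin's implementation; names and numbers are illustrative.

def split_global_batch(global_batch_size, throughputs):
    """Assign each worker a local batch size proportional to its measured throughput."""
    total = sum(throughputs)
    # Initial proportional (floored) allocation per worker.
    local = [int(global_batch_size * t / total) for t in throughputs]
    # Hand the rounding remainder to the fastest workers so the sizes sum exactly.
    remainder = global_batch_size - sum(local)
    fastest = sorted(range(len(throughputs)), key=lambda i: throughputs[i], reverse=True)
    for idx in fastest[:remainder]:
        local[idx] += 1
    return local


if __name__ == "__main__":
    # Hypothetical 4-GPU heterogeneous cluster; throughputs in samples/sec
    # measured during a short profiling phase.
    throughputs = [900.0, 900.0, 450.0, 450.0]
    print(split_global_batch(1024, throughputs))  # -> [342, 342, 170, 170]
```

In a PyTorch DistributedDataParallel setup, each rank would then use its assigned local batch size when building its DataLoader, rather than the uniform per-rank size that native DDP assumes.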