
Performance Modeling and Evaluation of Distributed Deep Learning Frameworks on GPUs

Shaohuai Shi, Xiaowen Chu
Department of Computer Science, Hong Kong Baptist University
arXiv:1711.05979 [cs.DC], 16 Nov 2017

@article{shi2017performance,
   title={Performance Modeling and Evaluation of Distributed Deep Learning Frameworks on GPUs},
   author={Shi, Shaohuai and Chu, Xiaowen},
   journal={arXiv preprint arXiv:1711.05979},
   year={2017},
   month={nov},
   archivePrefix={arXiv},
   eprint={1711.05979},
   primaryClass={cs.DC}
}

Deep learning frameworks have been widely deployed on GPU servers for deep learning applications in both academia and industry. Training deep neural networks (DNNs) involves many standard processes and algorithms, such as convolution and stochastic gradient descent (SGD), yet different frameworks can deliver different running performance even when training the same deep model on the same GPU hardware. In this paper, we evaluate the running performance of four state-of-the-art distributed deep learning frameworks (Caffe-MPI, CNTK, MXNet and TensorFlow) in single-GPU, multi-GPU and multi-node environments. We first build performance models of the standard processes in training DNNs with SGD, then benchmark the running performance of these frameworks with three popular convolutional neural networks (AlexNet, GoogleNet and ResNet-50), and finally analyze which factors cause the performance gap among the four frameworks. Through both analytical and experimental analysis, we identify bottlenecks and overheads that could be further optimized. The main contribution is two-fold. First, the testing results provide a reference for end users to choose the proper framework for their own scenarios. Second, the proposed performance models and the detailed analysis provide further optimization directions in both algorithmic design and system configuration.
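
As a rough illustration of the kind of per-iteration performance model the abstract refers to (the paper's exact formulation is not reproduced on this page, so the additive decomposition, term names and numbers below are assumptions for illustration only), synchronous data-parallel SGD time is often broken into I/O, forward, backward, gradient-communication and update terms:

# Hypothetical sketch of a per-iteration time model for synchronous
# data-parallel SGD; the terms and the simple additive form are
# illustrative assumptions, not the paper's actual model.

def iteration_time(t_io, t_forward, t_backward, t_comm, t_update,
                   overlap_comm_with_backward=False):
    """Estimate the wall-clock time of one SGD iteration (seconds).

    t_io       -- data loading / preprocessing time
    t_forward  -- forward pass over one mini-batch
    t_backward -- backward pass (gradient computation)
    t_comm     -- gradient exchange across GPUs / nodes
    t_update   -- weight update once gradients are available
    """
    if overlap_comm_with_backward:
        # If gradient communication of earlier layers overlaps with the
        # backward pass of later layers, only the longer of the two counts.
        compute = t_forward + max(t_backward, t_comm)
    else:
        compute = t_forward + t_backward + t_comm
    return t_io + compute + t_update


# Example (made-up numbers): compare a single-GPU iteration with a
# multi-GPU iteration that adds communication overlapped with backward.
single_gpu = iteration_time(0.01, 0.05, 0.10, 0.0, 0.005)
multi_gpu = iteration_time(0.01, 0.05, 0.10, 0.04, 0.005,
                           overlap_comm_with_backward=True)
print(f"per-GPU slowdown from communication: {multi_gpu / single_gpu:.2f}x")

Such a decomposition makes it easy to see which component (data I/O, GPU kernels, or inter-GPU/inter-node communication) dominates an iteration and therefore where each framework's overhead comes from.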