
Distributed Training of Deep Neuronal Networks: Theoretical and Practical Limits of Parallel Scalability

Janis Keuper
Fraunhofer ITWM
arXiv:1609.06870 [cs.CV] (22 Sep 2016)

@article{keuper2016distributed,
   title={Distributed Training of Deep Neuronal Networks: Theoretical and Practical Limits of Parallel Scalability},
   author={Keuper, Janis},
   year={2016},
   month={sep},
   eprint={1609.06870},
   archivePrefix={arXiv},
   primaryClass={cs.CV}
}


This paper presents a theoretical analysis and practical evaluation of the main bottlenecks towards a scalable distributed solution for the training of Deep Neuronal Networks (DNNs). The presented results show that the current state-of-the-art approach, using data-parallelized Stochastic Gradient Descent (SGD), is quickly turning into a vastly communication-bound problem. In addition, we present simple but fixed theoretical constraints that prevent effective scaling of DNN training beyond only a few dozen nodes. This leads to poor scalability of DNN training in most practical scenarios.
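The core of the argument can be illustrated with a simple back-of-the-envelope model of data-parallel SGD: each worker computes gradients on its shard of the mini-batch, so compute time shrinks with the node count, but the full gradient (one value per model parameter) must still be exchanged every iteration, so communication cost stays roughly constant. The sketch below is only illustrative; the per-step compute time, model size, and network bandwidth are assumed values, not figures taken from the paper.

# Toy model of data-parallel SGD scaling: per-node compute shrinks with the
# node count, but the per-iteration gradient exchange does not.
# All constants are illustrative assumptions, not values from the paper.

def step_time(nodes,
              compute_single_node=1.0,   # assumed seconds of gradient computation on one node
              model_params=60e6,         # assumed model size (~60M parameters)
              bytes_per_param=4,         # fp32 gradients
              bandwidth=1.25e9):         # assumed ~10 Gbit/s link, in bytes/s
    compute = compute_single_node / nodes                    # ideal compute scaling
    comm = 2 * model_params * bytes_per_param / bandwidth    # naive gradient exchange, roughly constant
    return compute + comm

for n in (1, 2, 4, 8, 16, 32, 64):
    t = step_time(n)
    speedup = step_time(1) / t
    print(f"{n:3d} nodes: step {t:.3f}s, speedup {speedup:4.1f}x")

Under these assumed numbers the speedup saturates once the constant communication term dominates the shrinking compute term, which mirrors the paper's claim that data-parallel DNN training stops scaling effectively beyond a few dozen nodes.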
