
Efficient Communications in Training Large Scale Neural Networks

Linnan Wang, Wei Wu, George Bosilca, Richard Vuduc, Zenglin Xu
School of Computer Science, Georgia Institute of Technology
arXiv:1611.04255 [cs.DC] (14 Nov 2016)

@article{wang2016efficient,
   title={Efficient Communications in Training Large Scale Neural Networks},
   author={Wang, Linnan and Wu, Wei and Bosilca, George and Vuduc, Richard and Xu, Zenglin},
   year={2016},
   month={nov},
   eprint={1611.04255},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}


We consider the problem of how to reduce the cost of the communication that is required for the parallel training of a neural network. The state-of-the-art method, Bulk Synchronous Parallel Stochastic Gradient Descent (BSP-SGD), requires many collective communication operations, such as broadcasts of parameters and reductions for sub-gradient aggregation, which for large messages quickly dominate overall execution time and limit parallel scalability. To address this problem, we develop a new technique for collective operations, referred to as Linear Pipelining (LP). It is tuned to the message sizes that arise in BSP-SGD, and works effectively on multi-GPU systems. Theoretically, the cost of LP is invariant to P, the number of GPUs, while the cost of the more conventional Minimum Spanning Tree (MST) approach scales as $O(\log P)$. LP also demonstrates up to 2x higher bandwidth than the Bidirectional Exchange (BE) techniques widely adopted by current MPI implementations. We apply these collectives to BSP-SGD, showing that the proposed implementations reduce communication bottlenecks in practice while preserving the attractive convergence properties of BSP-SGD.
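The LP collectives themselves are not reproduced on this page, but the pipelined-broadcast idea can be sketched in a few lines of CUDA. The sketch below splits a buffer held on GPU 0 into chunks and forwards each chunk along the chain GPU 0 -> 1 -> ... -> P-1 as soon as it arrives. This is a minimal illustration, not the authors' implementation; the message size, chunk count, and the stream/event layout are assumptions.

// Hedged sketch of a linear-pipeline broadcast across P GPUs (not the paper's code).
// GPU 0 holds the full buffer; each GPU forwards chunks to its right neighbour as
// soon as they arrive, so large-message transfer time is roughly independent of P.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

#define CUDA_CHECK(call) do { cudaError_t e = (call); \
    if (e != cudaSuccess) { std::printf("CUDA error: %s\n", cudaGetErrorString(e)); return 1; } } while (0)

int main() {
    int P = 0;
    CUDA_CHECK(cudaGetDeviceCount(&P));
    if (P < 2) { std::printf("need at least 2 GPUs\n"); return 0; }

    const size_t bytes = 64ull << 20;        // 64 MiB message (assumed size)
    const int nChunks = 16;                  // pipeline depth (assumed)
    const size_t chunk = bytes / nChunks;

    std::vector<char*> buf(P);
    std::vector<cudaStream_t> stream(P);     // stream[i] carries copies GPU i -> GPU i+1
    std::vector<std::vector<cudaEvent_t>> evt(P, std::vector<cudaEvent_t>(nChunks));

    for (int i = 0; i < P; ++i) {
        CUDA_CHECK(cudaSetDevice(i));
        CUDA_CHECK(cudaMalloc(&buf[i], bytes));
        if (i == 0) CUDA_CHECK(cudaMemset(buf[0], 1, bytes));  // pretend GPU 0 holds the parameters
        CUDA_CHECK(cudaStreamCreate(&stream[i]));
        for (int c = 0; c < nChunks; ++c)
            CUDA_CHECK(cudaEventCreateWithFlags(&evt[i][c], cudaEventDisableTiming));
        int canAccess = 0;                   // optional: direct peer-to-peer copies when available
        if (i + 1 < P && cudaDeviceCanAccessPeer(&canAccess, i, i + 1) == cudaSuccess && canAccess)
            cudaDeviceEnablePeerAccess(i + 1, 0);
    }

    // Pipelined broadcast: chunk c hops GPU 0 -> 1 -> ... -> P-1.
    // evt[i][c] marks "chunk c has been delivered from GPU i to GPU i+1".
    for (int c = 0; c < nChunks; ++c) {
        for (int i = 0; i + 1 < P; ++i) {
            CUDA_CHECK(cudaSetDevice(i));
            if (i > 0)  // forward chunk c only after it has arrived from GPU i-1
                CUDA_CHECK(cudaStreamWaitEvent(stream[i], evt[i - 1][c], 0));
            CUDA_CHECK(cudaMemcpyPeerAsync(buf[i + 1] + c * chunk, i + 1,
                                           buf[i] + c * chunk, i, chunk, stream[i]));
            CUDA_CHECK(cudaEventRecord(evt[i][c], stream[i]));
        }
    }
    for (int i = 0; i < P; ++i) {
        CUDA_CHECK(cudaSetDevice(i));
        CUDA_CHECK(cudaStreamSynchronize(stream[i]));
    }
    std::printf("pipelined broadcast finished on %d GPUs\n", P);
    return 0;
}

Because each hop runs in its own stream and chunks are processed in order within a stream, copies on different links overlap in time, which is what makes the per-message cost of a linear pipeline roughly independent of the number of GPUs once the message is large relative to the chunk size.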
