Efficient Communications in Training Large Scale Neural Networks
School of Computer Science, Georgia Institute of Technology
arXiv:1611.04255 [cs.DC] (14 Nov 2016)
@article{wang2016efficient,
title={Efficient Communications in Training Large Scale Neural Networks},
author={Wang, Linnan and Wu, Wei and Bosilca, George and Vuduc, Richard and Xu, Zenglin},
year={2016},
month={nov},
eprint={1611.04255},
archivePrefix={arXiv},
primaryClass={cs.DC}
}
We consider the problem of how to reduce the cost of the communication required for parallel training of a neural network. The state-of-the-art method, Bulk Synchronous Parallel Stochastic Gradient Descent (BSP-SGD), requires many collective communication operations, such as broadcasts of parameters and reductions for sub-gradient aggregation, which for large messages quickly dominate overall execution time and limit parallel scalability. To address this problem, we develop a new technique for collective operations, referred to as Linear Pipelining (LP). It is tuned to the message sizes that arise in BSP-SGD and works effectively on multi-GPU systems. Theoretically, the cost of LP is invariant to $P$, the number of GPUs, while the cost of the more conventional Minimum Spanning Tree (MST) approach scales as $O(\log P)$. LP also delivers up to 2x higher bandwidth than the Bidirectional Exchange (BE) techniques widely adopted by current MPI implementations. We apply these collectives to BSP-SGD, showing that the proposed implementations reduce communication bottlenecks in practice while preserving the attractive convergence properties of BSP-SGD.
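As a rough sketch of where these scaling claims come from (using the standard $\alpha$-$\beta$ cost model rather than the paper's own analysis), suppose sending an $n$-byte message costs $\alpha + \beta n$ and the message is split into $k$ chunks forwarded along a chain of $P$ GPUs:

$T_{\mathrm{LP}}(n, P) \approx (k + P - 2)\,(\alpha + \beta n / k) \;\rightarrow\; \beta n$ for large $n$ and a well-chosen $k$, independent of $P$
$T_{\mathrm{MST}}(n, P) \approx \lceil \log_2 P \rceil \,(\alpha + \beta n)$
$T_{\mathrm{BE}}(n, P) \approx 2\,\tfrac{P-1}{P}\,\beta n \;\approx\; 2\beta n$

Under this model, the pipelined chain's bandwidth term stays near $\beta n$ regardless of $P$, the MST cost grows with $\log P$, and the roughly $2\beta n$ term of BE-style (scatter plus allgather) broadcasts is consistent with the up-to-2x bandwidth advantage reported for LP.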
November 16, 2016 by hgpu