Theano-MPI: a Theano-based Distributed Training Framework
He Ma, Fei Mao, Graham W. Taylor
School of Engineering, University of Guelph, CA
arXiv:1605.08325 [cs.LG], 26 May 2016
@article{ma2016theanompi,
  title={Theano-MPI: a Theano-based Distributed Training Framework},
  author={Ma, He and Mao, Fei and Taylor, Graham W.},
  year={2016},
  month={may},
  eprint={1605.08325},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
We develop a scalable and extensible training framework that utilizes GPUs across the nodes of a cluster to accelerate the training of deep learning models through data parallelism. The framework implements both synchronous and asynchronous training, with parameter exchange among GPUs carried out over CUDA-aware MPI. In this report, we analyze the convergence behaviour of the framework and its ability to reduce training time when scaling synchronous training of AlexNet and GoogLeNet from 2 GPUs to 8 GPUs. In addition, we explore novel ways to reduce the communication overhead caused by exchanging parameters. Finally, we release the framework as open source for further research on distributed deep learning.
May 28, 2016 by hgpu