
Theano-MPI: a Theano-based Distributed Training Framework

He Ma, Fei Mao, Graham W. Taylor
School of Engineering, University of Guelph, CA
arXiv:1605.08325 [cs.LG] (26 May 2016)

@article{ma2016theanompi,
   title={Theano-MPI: a Theano-based Distributed Training Framework},
   author={Ma, He and Mao, Fei and Taylor, Graham W.},
   year={2016},
   month={may},
   eprint={1605.08325},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
}


We develop a scalable and extensible training framework that can utilize GPUs across nodes in a cluster to accelerate the training of deep learning models through data parallelism. Both synchronous and asynchronous training are implemented in our framework, where parameter exchange among GPUs is based on CUDA-aware MPI. In this report, we analyze convergence and the framework's ability to reduce training time when scaling the synchronous training of AlexNet and GoogLeNet from 2 GPUs to 8 GPUs. In addition, we explore novel ways to reduce the communication overhead caused by exchanging parameters. Finally, we release the framework as open source for further research on distributed deep learning.
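
For illustration, a minimal sketch of the synchronous exchange step, assuming mpi4py and gradients held in NumPy arrays (the helper average_gradients is hypothetical, not the framework's API; Theano-MPI's actual exchanger moves GPU-resident buffers directly through CUDA-aware MPI):

# Sketch: average gradients across all MPI ranks after each mini-batch.
# Assumes mpi4py; average_gradients is a hypothetical helper, not part
# of Theano-MPI. The real framework exchanges GPU buffers in place.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()

def average_gradients(local_grads):
    # Sum each gradient array over all ranks, then divide by the
    # number of workers so every rank holds the mean gradient.
    averaged = []
    for g in local_grads:
        buf = np.empty_like(g)
        comm.Allreduce(g, buf, op=MPI.SUM)
        averaged.append(buf / size)
    return averaged

Every worker then applies the same averaged update, keeping the model replicas in lockstep; flattening all parameters into one contiguous buffer before a single allreduce is a common way to cut per-exchange latency, which is the kind of communication overhead the report targets.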