
Parallel training of Deep Neural Networks with Natural Gradient and Parameter Averaging

Daniel Povey, Xiaohui Zhang, Sanjeev Khudanpur
Center for Language and Speech Processing, Johns Hopkins University
arXiv:1410.7455 [cs.NE], 27 Oct 2014

@article{2014arXiv1410.7455P,
   author = {{Povey}, D. and {Zhang}, X. and {Khudanpur}, S.},
   title = "{Parallel training of Deep Neural Networks with Natural Gradient and Parameter Averaging}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1410.7455},
   keywords = {Computer Science - Neural and Evolutionary Computing, Computer Science - Learning, Statistics - Machine Learning},
   year = 2014,
   month = oct,
   adsurl = {http://adsabs.harvard.edu/abs/2014arXiv1410.7455P},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}



We describe the neural-network training framework used in the Kaldi speech recognition toolkit, which is geared towards training DNNs with large amounts of training data using multiple GPU-equipped or multi-core machines. In order to be as hardware-agnostic as possible, we needed a way to use multiple machines without generating excessive network traffic. Our method is to average the neural network parameters periodically (typically every minute or two), and redistribute the averaged parameters to the machines for further training. Each machine sees different data. By itself, this method does not work very well. However, we have another method, an approximate and efficient implementation of Natural Gradient for Stochastic Gradient Descent (NG-SGD), which seems to allow our periodic-averaging method to work well, as well as substantially improving the convergence of SGD on a single machine.
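The periodic-averaging scheme described in the abstract can be sketched as follows. This is an illustrative toy (linear regression via NumPy stands in for a DNN, and the names `run_sgd_steps` and the shard layout are assumptions, not Kaldi's actual implementation): each worker runs plain SGD on its own data shard, and at the end of each period the parameters are averaged and redistributed to all workers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = X @ w_true, split across 4 "machines".
w_true = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(400, 3))
y = X @ w_true
shards = [(X[i::4], y[i::4]) for i in range(4)]  # each machine sees different data

def run_sgd_steps(w, Xs, ys, lr=0.05, steps=50):
    """Plain SGD on one worker's shard (one example per step) -- hypothetical helper."""
    w = w.copy()
    for t in range(steps):
        i = t % len(ys)
        grad = (Xs[i] @ w - ys[i]) * Xs[i]  # gradient of 0.5 * (x.w - y)^2
        w -= lr * grad
    return w

w_global = np.zeros(3)
for _ in range(10):  # ten averaging periods ("every minute or two" in practice)
    # Each worker starts the period from the same averaged parameters.
    local_params = [run_sgd_steps(w_global, Xs, ys) for Xs, ys in shards]
    # Average the workers' parameters and redistribute for the next period.
    w_global = np.mean(local_params, axis=0)

print(np.round(w_global, 3))
```

Note that only the averaged parameters cross the network, once per period, which is what keeps the method hardware-agnostic and low-traffic; the paper's point is that this naive averaging works well only when combined with their NG-SGD preconditioning, which this sketch does not include.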

* * *


HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
