
Parle: parallelizing stochastic gradient descent

Pratik Chaudhari, Carlo Baldassi, Riccardo Zecchina, Stefano Soatto, Ameet Talwalkar
Computer Science Department, University of California, Los Angeles
arXiv:1707.00424 [cs.LG] (3 Jul 2017)

@article{chaudhari2017parle,
   title={Parle: parallelizing stochastic gradient descent},
   author={Chaudhari, Pratik and Baldassi, Carlo and Zecchina, Riccardo and Soatto, Stefano and Talwalkar, Ameet},
   year={2017},
   month={jul},
   eprint={1707.00424},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
}

We propose a new algorithm called Parle for parallel training of deep networks that converges 2-4x faster than a data-parallel implementation of SGD, while achieving significantly improved error rates that are nearly state-of-the-art on several benchmarks including CIFAR-10 and CIFAR-100, without introducing any additional hyper-parameters. We exploit the phenomenon of flat minima that has been shown to lead to improved generalization error for deep networks. Parle requires very infrequent communication with the parameter server and instead performs more computation on each client, which makes it well-suited to both single-machine, multi-GPU settings and distributed implementations.
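Below is a minimal sketch of the kind of elastic-coupling scheme the abstract describes: each replica (client) runs many local SGD steps, augmented with a proximal term pulling it toward the parameter-server copy, and communication with the server happens only once per round. The toy quadratic loss, replica count, step sizes, and coupling strength `rho` are illustrative assumptions, not the authors' exact algorithm or hyper-parameters.

```python
# Sketch of infrequent-communication, elastically coupled SGD (assumptions noted inline).
import numpy as np

rng = np.random.default_rng(0)

def grad_loss(x):
    # Gradient of a toy quadratic loss f(x) = 0.5 * ||x - 1||^2 (stand-in for a network's loss).
    return x - 1.0

dim, n_replicas = 10, 4          # problem size and number of clients (assumed)
local_steps, rounds = 25, 20     # many local steps per round -> infrequent communication
lr, rho = 0.1, 0.1               # learning rate and elastic coupling strength (assumed)

master = np.zeros(dim)                                           # parameter-server copy
replicas = [master + 0.01 * rng.standard_normal(dim) for _ in range(n_replicas)]

for _ in range(rounds):
    for i, x in enumerate(replicas):
        for _ in range(local_steps):
            # Each client does extra local computation: SGD on its loss plus an
            # elastic term pulling it toward the last master copy it received.
            x = x - lr * (grad_loss(x) + rho * (x - master))
        replicas[i] = x
    # Communication step: the master moves toward the average of the replicas.
    master = (1 - rho) * master + rho * np.mean(replicas, axis=0)

print("distance of master from optimum:", np.linalg.norm(master - 1.0))
```

The point of the sketch is the communication pattern: gradients stay local, and only the comparatively cheap master update crosses the client/server boundary once per round, which is what makes the approach suit both multi-GPU and distributed settings.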