Parle: parallelizing stochastic gradient descent
Computer Science Department, University of California, Los Angeles
arXiv:1707.00424 [cs.LG] (3 Jul 2017)
@article{chaudhari2017parle,
title={Parle: parallelizing stochastic gradient descent},
author={Chaudhari, Pratik and Baldassi, Carlo and Zecchina, Riccardo and Soatto, Stefano and Talwalkar, Ameet},
year={2017},
month={jul},
eprint={1707.00424},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
We propose a new algorithm called Parle for parallel training of deep networks that converges 2-4x faster than a data-parallel implementation of SGD, while achieving significantly improved error rates that are nearly state-of-the-art on several benchmarks including CIFAR-10 and CIFAR-100, without introducing any additional hyper-parameters. We exploit the phenomenon of flat minima that has been shown to lead to improved generalization error for deep networks. Parle requires very infrequent communication with the parameter server and instead performs more computation on each client, which makes it well-suited to both single-machine, multi-GPU settings and distributed implementations.
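The abstract describes the structure at a high level: several client replicas do most of the computation locally and synchronize with a parameter server only infrequently. Below is a minimal illustrative sketch of that communication pattern in Python, using an elastic-averaging-style coupling on a toy least-squares problem. It is not the authors' implementation: the replica count, coupling strength rho, number of local steps L, and learning rate are assumed values, and the sketch omits the flat-minima-seeking local updates the paper builds on, keeping only the infrequent-communication structure.

# Hedged sketch: parallel SGD with infrequent parameter-server communication,
# in the spirit of the abstract above. Not the authors' Parle code; all
# hyper-parameters below (n_replicas, L, rho, lr) are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem shared by all replicas.
A = rng.normal(size=(256, 16))
x_true = rng.normal(size=16)
b = A @ x_true + 0.1 * rng.normal(size=256)

def grad(x, batch):
    """Stochastic gradient of 0.5*||A x - b||^2 on a mini-batch."""
    Ab, bb = A[batch], b[batch]
    return Ab.T @ (Ab @ x - bb) / len(batch)

n_replicas = 4     # "clients" in the abstract
L = 25             # local steps between rounds of communication
rho = 0.1          # elastic coupling strength (assumed value)
lr = 0.05          # learning rate (assumed value)

x_master = np.zeros(16)                        # parameter-server copy
replicas = [x_master.copy() for _ in range(n_replicas)]

for round_ in range(40):
    # Each replica performs plenty of local computation ...
    for r in range(n_replicas):
        for _ in range(L):
            batch = rng.integers(0, len(b), size=32)
            # gradient step plus an elastic pull toward the master copy
            replicas[r] -= lr * (grad(replicas[r], batch)
                                 + rho * (replicas[r] - x_master))
    # ... while communication with the parameter server is infrequent:
    # once per round, the master moves toward the average of the replicas.
    x_master += rho * sum(x - x_master for x in replicas) / n_replicas

print("distance to x_true:", np.linalg.norm(x_master - x_true))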
July 5, 2017 by hgpu