Parallel and Distributed Deep Learning

Vishakh Hegde, Sheema Usmani
Stanford University, 2016


@techreport{hegde2016parallel,
   title={Parallel and Distributed Deep Learning},
   author={Hegde, Vishakh and Usmani, Sheema},
   institution={Stanford University},
   year={2016}
}





The goal of this report is to explore ways to parallelize and distribute deep learning in multi-core and distributed settings. We empirically analyze the speedup in training a CNN on a conventional single-core CPU versus a GPU and provide practical suggestions for improving training times. In the distributed setting, we study and analyze synchronous and asynchronous weight-update algorithms (such as Parallel SGD, ADMM, and Downpour SGD) and derive the worst-case asymptotic communication cost and computation time for each of these algorithms.
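To make the synchronous case concrete, the sketch below shows Parallel SGD in the parameter-averaging style the abstract refers to: each worker runs plain SGD on its own data shard from a common initialization, and a single synchronous communication step averages the resulting weight vectors. This is an illustrative toy on a least-squares objective; the function names and hyperparameters are hypothetical and not taken from the report's code.

```python
import numpy as np

def local_sgd(X, y, w, lr=0.1, epochs=50):
    """Plain SGD on one worker's shard for a least-squares objective."""
    for _ in range(epochs):
        for i in range(len(X)):
            # Gradient of 0.5 * (x.w - y)^2 with respect to w.
            grad = (X[i] @ w - y[i]) * X[i]
            w = w - lr * grad
    return w

def parallel_sgd(X, y, n_workers=4, seed=0):
    """Synchronous Parallel SGD via parameter averaging (toy sketch)."""
    rng = np.random.default_rng(seed)
    shards = np.array_split(np.arange(len(X)), n_workers)
    w0 = rng.normal(size=X.shape[1])  # shared initialization
    # Each "worker" trains independently on its own shard.
    local_weights = [local_sgd(X[idx], y[idx], w0.copy()) for idx in shards]
    # One synchronous communication step: average the weight vectors.
    return np.mean(local_weights, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    w_true = np.array([2.0, -3.0])
    X = rng.normal(size=(400, 2))
    y = X @ w_true
    print(np.round(parallel_sgd(X, y), 2))
```

In a real distributed run the averaging step is where the communication cost the report analyzes appears: each worker sends one weight vector per synchronization round, so less frequent averaging trades statistical efficiency for lower communication.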

* * *


HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
