
DeepSpark: Spark-Based Deep Learning Supporting Asynchronous Updates and Caffe Compatibility

Hanjoo Kim, Jaehong Park, Jaehee Jang, Sungroh Yoon
Electrical Engineering and Computer Science, Seoul National University, Seoul 08826, Korea
arXiv:1602.08191 [cs.LG], (26 Feb 2016)

@article{kim2016deepspark,
   title={DeepSpark: Spark-Based Deep Learning Supporting Asynchronous Updates and Caffe Compatibility},
   author={Kim, Hanjoo and Park, Jaehong and Jang, Jaehee and Yoon, Sungroh},
   year={2016},
   month={feb},
   eprint={1602.08191},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
}

The increasing complexity of deep neural networks (DNNs) has made it challenging to exploit existing large-scale data processing pipelines for handling the massive data and parameters involved in DNN training. Distributed computing platforms and GPGPU-based acceleration provide a mainstream solution to this computational challenge. In this paper, we propose DeepSpark, a distributed and parallel deep learning framework that simultaneously exploits Apache Spark for large-scale distributed data management and Caffe for GPU-based acceleration. DeepSpark directly accepts Caffe input specifications, providing seamless compatibility with existing designs and network structures. To support parallel operations, DeepSpark automatically distributes workloads and parameters to Caffe-running nodes using Spark, and iteratively aggregates training results via a novel lock-free asynchronous variant of the popular elastic averaging stochastic gradient descent (SGD) update scheme, effectively complementing the synchronized processing capabilities of Spark. DeepSpark is an ongoing project, and the current release is available.
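The elastic averaging SGD scheme the abstract refers to can be illustrated with a minimal NumPy sketch. This is not DeepSpark's implementation; it only shows the core EASGD update rule that the paper's asynchronous variant builds on: each worker takes a local gradient step while an elastic term `alpha * (x - x_center)` pulls the worker and the center (parameter server) parameters toward each other. The function name, signature, and hyperparameter values below are illustrative assumptions.

```python
import numpy as np

def easgd_worker_step(x, x_center, grad, lr=0.1, alpha=0.5):
    """One asynchronous EASGD-style step (illustrative sketch).

    x        -- the worker's local parameters
    x_center -- the center (parameter server) parameters
    grad     -- gradient of the local loss at x
    lr       -- learning rate for the local SGD step
    alpha    -- elastic coupling strength
    """
    # Elastic force pulling the worker and the center together.
    elastic = alpha * (x - x_center)
    # Worker: local SGD step plus the elastic pull toward the center.
    x_new = x - lr * grad - elastic
    # Center: moves toward the worker by the same elastic amount;
    # in the asynchronous setting each worker applies this independently.
    center_new = x_center + elastic
    return x_new, center_new

# Toy usage: one step with scalar parameters.
x, c = np.array([1.0]), np.array([0.0])
x, c = easgd_worker_step(x, c, grad=np.array([2.0]))
```

In the asynchronous variant described in the paper, workers apply such updates against the center without a global barrier, so stragglers do not block the other Caffe-running nodes.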


HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
