A Training Framework and Architectural Design for Distributed Deep Learning

Wei Wang
Department of Computer Science, National University of Singapore
National University of Singapore, 2016

@phdthesis{wei2016training,
  title  = {A Training Framework and Architectural Design for Distributed Deep Learning},
  author = {Wang, Wei},
  school = {National University of Singapore},
  year   = {2016}
}

Deep learning has recently gained much attention on account of its remarkable success in many complex data-driven applications, such as image classification. However, deep learning systems are difficult to use and apply: training a large model, for example, is slow, error-prone, and memory-intensive. This thesis presents our investigations into and approaches to these challenges. First, we conduct a comprehensive analysis of optimization techniques for deep learning systems, covering both stand-alone and distributed training. Second, we design and develop a distributed deep learning system, named SINGA, which tackles the usability problem and implements optimization techniques for distributed training. SINGA provides a flexible system architecture for running different distributed training frameworks. Last, we propose deep learning based methods for effective multi-modal retrieval on top of SINGA, which outperform state-of-the-art approaches.
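To make the distributed training setting concrete, the following is a minimal sketch of data-parallel synchronous SGD with a central parameter server, the kind of training framework the abstract refers to. This is an illustrative toy (a 1-D linear model, with workers simulated serially), not SINGA's actual API; all function names here are made up for the example.

```python
# Hypothetical sketch of data-parallel synchronous SGD (not SINGA's API):
# each worker holds a shard of the data and computes a local gradient;
# a parameter server averages the gradients and updates the shared model.
import random

def grad(w, batch):
    # Gradient of mean squared error for the 1-D linear model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def train(data, n_workers=4, lr=0.05, steps=200):
    w = 0.0
    # Partition the data across workers (round-robin sharding).
    shards = [data[i::n_workers] for i in range(n_workers)]
    for _ in range(steps):
        # Each worker computes a gradient on its own shard
        # (simulated serially here; in practice this runs in parallel).
        grads = [grad(w, shard) for shard in shards]
        # The parameter server averages the gradients and updates w,
        # then broadcasts the new parameters to all workers.
        w -= lr * sum(grads) / n_workers
    return w

random.seed(0)
# Synthetic data drawn from y = 3x; training should recover w close to 3.
data = [(x, 3.0 * x) for x in (random.uniform(-1, 1) for _ in range(64))]
w = train(data)
```

Because the update averages all worker gradients before applying them, one synchronous step is mathematically equivalent to a single large-batch SGD step; asynchronous frameworks relax exactly this barrier, which is one of the trade-offs the thesis's analysis of distributed training covers.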

* * *


HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
