
A Training Framework and Architectural Design for Distributed Deep Learning

Wei Wang
Department of Computer Science, National University of Singapore
National University of Singapore, 2016

@phdthesis{wei2016training,
   title  = {A Training Framework and Architectural Design for Distributed Deep Learning},
   author = {Wang, Wei},
   school = {National University of Singapore},
   year   = {2016}
}

Deep learning has recently gained a lot of attention on account of its remarkable success in many complex data-driven applications, such as image classification. However, deep learning systems are difficult to use and apply; for example, training a large model is tricky, slow, and can consume a great deal of memory. This thesis presents our investigation of these challenges and our approaches to them. First, we have conducted a comprehensive analysis of optimization techniques for deep learning systems, covering both stand-alone and distributed training. Second, we have designed and developed a distributed deep learning system, named SINGA, which tackles the usability problem and realizes the optimization techniques for distributed training; SINGA provides a flexible system architecture for running different distributed training frameworks. Last, we have proposed deep-learning-based methods, built on top of SINGA, for effective multi-modal retrieval that outperform state-of-the-art approaches.
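The parameter-server style of distributed training that underlies systems of this kind can be summarized in a few lines. The sketch below is a conceptual, framework-agnostic illustration in plain Python/NumPy and does not use SINGA's actual API; all class and function names (ParameterServer, worker_gradient) are hypothetical. Workers compute gradients on disjoint data shards, and a server aggregates them to apply a single synchronous SGD update to the shared model.

import numpy as np

# Conceptual sketch of synchronous data-parallel training with a
# parameter server. NOT the SINGA API; names are illustrative only.

class ParameterServer:
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)   # shared model parameters
        self.lr = lr

    def update(self, grads):
        # Aggregate worker gradients, then apply one SGD step.
        self.w -= self.lr * np.mean(grads, axis=0)

def worker_gradient(w, X_shard, y_shard):
    # Least-squares gradient computed on this worker's data shard.
    preds = X_shard @ w
    return X_shard.T @ (preds - y_shard) / len(y_shard)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ true_w + 0.01 * rng.normal(size=1000)

# Partition the data across four simulated workers.
num_workers = 4
shards = list(zip(np.array_split(X, num_workers),
                  np.array_split(y, num_workers)))

server = ParameterServer(dim=5)
for step in range(200):
    grads = [worker_gradient(server.w, Xs, ys) for Xs, ys in shards]
    server.update(grads)

print("learned weights:", np.round(server.w, 2))

In an actual distributed deployment the workers would run as separate processes or nodes and exchange gradients and parameters over the network; the synchronous aggregation shown here is only one of the training frameworks such a system can realize.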