
Exploring the power of GPU’s for training Deep Belief Networks

Vivek Kulkarni
Department of Computer Science, Stony Brook University
arXiv:1404.1521 [cs.LG] (5 Apr 2014)

@article{2014arXiv1404.1521K,
   author = {{Kulkarni}, V.},
   title = "{Exploring the power of GPU's for training Deep Belief Networks}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1404.1521},
   primaryClass = "cs.LG",
   keywords = {Computer Science - Learning, Computer Science - Computation and Language},
   year = {2014},
   month = apr,
   adsurl = {http://adsabs.harvard.edu/abs/2014arXiv1404.1521K},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


One of the major current research trends is the evolution of heterogeneous parallel computing. GP-GPU computing is now widely used, and several applications have been designed to exploit the massive parallelism that GP-GPUs offer. While GPUs have long been used in computer vision for image processing, little work has investigated whether the massive parallelism of GP-GPUs can be exploited effectively for Natural Language Processing (NLP) tasks. In this work, we investigate the power of GP-GPUs for the task of learning language models. More specifically, we study the performance of training a language model represented by a deep belief neural network. We evaluate the performance of training this model on the GPU and present several optimizations that boost it. One of the key optimizations we propose speeds up a function involved in calculating and updating the gradient by approximately 50 times on the GPU for sufficiently large batch sizes. We show that with these optimizations the GP-GPU's performance on the task increases by a factor of approximately 3-4, and that the GPU's performance on this task becomes comparable to that of the CPU. We conclude with a thorough evaluation of the applicability of GP-GPUs for this task and highlight the factors limiting the performance of the language model on the GPU.
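The abstract does not include code, but the core operation it refers to (calculating and updating the gradient over a batch) can be sketched. Below is a minimal, illustrative CD-1 (contrastive divergence) update for a single restricted Boltzmann machine layer of a deep belief network, written with NumPy arrays; the function and parameter names are assumptions, not the author's. The point of the sketch is that the batched weight gradient reduces to a pair of matrix multiplications whose cost grows with batch size, which is the kind of work that only pays off on a GPU once the batch is large enough.

# Minimal sketch (not the paper's code): one batched CD-1 gradient update
# for a binary RBM layer of a deep belief network. Names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b_vis, b_hid, v0, lr=0.01):
    """One contrastive-divergence (CD-1) step on a batch v0 of shape
    (batch_size, n_visible)."""
    # Positive phase: hidden activations given the data
    h0_prob = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(v0.dtype)
    # Negative phase: one Gibbs step down to the visible layer and back up
    v1_prob = sigmoid(h0 @ W.T + b_vis)
    h1_prob = sigmoid(v1_prob @ W + b_hid)
    # Batched weight gradient: two (n_visible x n_hidden) matrix products
    batch = v0.shape[0]
    dW = (v0.T @ h0_prob - v1_prob.T @ h1_prob) / batch
    W += lr * dW
    b_vis += lr * (v0 - v1_prob).mean(axis=0)
    b_hid += lr * (h0_prob - h1_prob).mean(axis=0)
    return W, b_vis, b_hid

# Toy usage: 256-sample batch, 784 visible units, 500 hidden units
W = 0.01 * rng.standard_normal((784, 500))
b_vis = np.zeros(784)
b_hid = np.zeros(500)
v0 = (rng.random((256, 784)) > 0.5).astype(np.float64)
W, b_vis, b_hid = cd1_update(W, b_vis, b_hid, v0)

On a GPU the same structure applies with a device array library in place of NumPy; since the per-update matrix products scale with batch size, this is consistent with the abstract's observation that the reported speedup appears only for sufficiently large batches.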
