Efficient GPU-based Training of Recurrent Neural Network Language Models Using Spliced Sentence Bunch

X. Chen, Y. Wang, X. Liu, M.J.F. Gales, P. C. Woodland
University of Cambridge Engineering Dept, Trumpington St., Cambridge, CB2 1PZ, U.K.
INTERSPEECH, 2014

@inproceedings{chen2014efficient,
  title={Efficient GPU-based Training of Recurrent Neural Network Language Models Using Spliced Sentence Bunch},
  author={Chen, X. and Wang, Y. and Liu, X. and Gales, M. J. F. and Woodland, P. C.},
  booktitle={INTERSPEECH},
  year={2014}
}

Recurrent neural network language models (RNNLMs) are becoming increasingly popular for a range of applications including speech recognition. However, an important issue that limits the quantity of training data that can be used, and hence their possible application areas, is the computational cost of training. A standard approach to this problem is to use class-based outputs, allowing systems to be trained on CPUs. This paper describes an alternative approach that allows RNNLMs to be efficiently trained on GPUs. This enables larger quantities of data to be used, and networks with an unclustered, full output layer to be trained. To improve efficiency on GPUs, multiple sentences are "spliced" together for each mini-batch or "bunch" in training. On a large vocabulary conversational telephone speech recognition task, the training time was reduced by a factor of 27 over the standard CPU-based RNNLM toolkit. The use of an unclustered, full output layer also improves perplexity and recognition performance over class-based RNNLMs.
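The core efficiency idea is that several sentences are concatenated ("spliced") into each of the parallel streams of a mini-batch ("bunch"), so the GPU processes a dense matrix of word ids at every time step instead of idling on short or uneven sentences. The Python sketch below illustrates one way such bunches could be constructed; the function name, round-robin stream assignment, and padding scheme are illustrative assumptions and not the paper's actual toolkit implementation.

    import numpy as np

    def make_spliced_bunches(sentences, bunch_size, steps, pad_id=0):
        """Splice sentences end-to-end into `bunch_size` parallel streams and
        cut them into (bunch_size x steps) input/target matrices.

        sentences : list of lists of word ids (each ending with </s>)
        bunch_size: number of parallel streams per bunch (mini-batch width)
        steps     : number of time steps per bunch
        pad_id    : id used to pad the tail of a stream (assumption)
        """
        # 1) Distribute sentences round-robin over the streams and splice them.
        streams = [[] for _ in range(bunch_size)]
        for i, sent in enumerate(sentences):
            streams[i % bunch_size].extend(sent)

        # 2) Pad every stream to a common length that is a multiple of `steps`
        #    (one extra position is kept for the shifted targets).
        max_len = max(len(s) for s in streams)
        total = ((max_len + steps - 1) // steps) * steps + 1
        data = np.full((bunch_size, total), pad_id, dtype=np.int64)
        for b, s in enumerate(streams):
            data[b, :len(s)] = s

        # 3) Yield (input, target) bunches; targets are the inputs shifted by one.
        for t in range(0, total - 1, steps):
            yield data[:, t:t + steps], data[:, t + 1:t + 1 + steps]

    # Hypothetical toy usage: word ids only, 2 streams, 4 steps per bunch.
    if __name__ == "__main__":
        toy_sentences = [[5, 6, 2], [7, 8, 9, 2], [3, 4, 2]]  # 2 = </s>
        for x, y in make_spliced_bunches(toy_sentences, bunch_size=2, steps=4):
            print(x, y, sep="\n")

In this sketch each of the `bunch_size` rows stays dense across bunches, which is what lets the GPU batch the hidden-state updates and the full (unclustered) output-layer softmax over all streams at once.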
