Scaling Recurrent Neural Network Language Models

Will Williams, Niranjani Prasad, David Mrva, Tom Ash, Tony Robinson
Cantab Research, Cambridge, UK
arXiv:1502.00512 [cs.CL] (2 Feb 2015)

@article{williams2015scaling,
   title={Scaling Recurrent Neural Network Language Models},
   author={Williams, Will and Prasad, Niranjani and Mrva, David and Ash, Tom and Robinson, Tony},
   year={2015},
   month={feb},
   eprint={1502.00512},
   archivePrefix={arXiv},
   primaryClass={cs.CL}
}

This paper investigates the scaling properties of Recurrent Neural Network Language Models (RNNLMs). We discuss how to train very large RNNs on GPUs and address the questions of how RNNLMs scale with respect to model size, training-set size, computational cost and memory. Our analysis shows that despite being more costly to train, RNNLMs obtain much lower perplexities on standard benchmarks than n-gram models. We train the largest known RNNs and present relative word error rate gains of 18% on an ASR task. We also report the lowest perplexities to date on the recently released billion-word language modelling benchmark, a 1 BLEU point gain on machine translation and a 17% relative hit-rate gain in word prediction.
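Since the abstract centres on RNNLMs and their perplexity, the sketch below shows what a tiny RNN language model and its perplexity computation look like in plain PyTorch. This is not the authors' implementation: the layer sizes, vocabulary size, and random toy batch are illustrative assumptions, and the paper's models are vastly larger and trained on GPUs.

```python
# Minimal sketch of an RNN language model (RNNLM) and its perplexity.
# All sizes and the toy data below are placeholders, not the paper's setup.
import torch
import torch.nn as nn

class RNNLM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, hidden=None):
        # tokens: (batch, seq_len) integer word ids
        emb = self.embed(tokens)
        out, hidden = self.rnn(emb, hidden)
        return self.proj(out), hidden  # logits over the next word at each step

def perplexity(model, tokens):
    """Perplexity = exp(mean negative log-likelihood of the next word)."""
    logits, _ = model(tokens[:, :-1])      # predict token t+1 from the prefix
    targets = tokens[:, 1:]
    nll = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    return torch.exp(nll).item()

if __name__ == "__main__":
    vocab_size = 1000                                # toy vocabulary (assumption)
    model = RNNLM(vocab_size)
    batch = torch.randint(0, vocab_size, (4, 20))    # random token ids as stand-in data
    print(f"perplexity on random data: {perplexity(model, batch):.1f}")
```

An untrained model on uniformly random tokens gives a perplexity near the vocabulary size; training drives it down, and the paper's contribution is showing how far this improves over n-gram baselines as the RNN and training set grow.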
