
Increasing Deep Neural Network Acoustic Model Size for Large Vocabulary Continuous Speech Recognition

Andrew L. Maas, Awni Y. Hannun, Christopher T. Lengerich, Peng Qi, Daniel Jurafsky, Andrew Y. Ng
Computer Science Department, Stanford University, CA 94305 USA
arXiv:1406.7806 [cs.CL], 30 Jun 2014



Deep neural networks (DNNs) are now a central component of nearly all state-of-the-art speech recognition systems. Part of the promise of DNNs is their ability to represent increasingly complex functions as the number of DNN parameters increases. This paper investigates the performance of DNN-based hybrid speech recognition systems as DNN model size and training data increase. Using a distributed GPU architecture, we train DNN acoustic models roughly an order of magnitude larger than those typically found in speech recognition systems. DNNs of this scale achieve substantial reductions in final system word error rate despite training with a loss function not tightly coupled to system error rate. However, training word error rate improvements do not translate to large improvements in test set word error rate for systems trained on the 300-hour Switchboard conversational speech corpus. Scaling DNN acoustic model size does prove beneficial on the Fisher 2,000-hour conversational speech corpus. Our results show that with sufficient training data, increasing DNN model size is an effective, direct path to performance improvements. Moreover, even smaller DNNs benefit from a larger training corpus.
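To illustrate what "scaling DNN acoustic model size" means in a hybrid system, the sketch below shows a fully connected acoustic model whose parameter count grows with hidden-layer width. This is a minimal illustration assuming PyTorch, not the authors' code; the input dimension, layer count, hidden widths, and senone count are placeholder assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' implementation) of a fully connected DNN
# acoustic model for a hybrid HMM-DNN system, assuming PyTorch. All dimensions
# below are illustrative assumptions.
import torch
import torch.nn as nn


class DNNAcousticModel(nn.Module):
    def __init__(self, input_dim=440, hidden_dim=2048, num_layers=5,
                 num_senones=9000):
        super().__init__()
        layers = []
        prev = input_dim  # e.g., stacked acoustic feature frames
        for _ in range(num_layers):
            layers += [nn.Linear(prev, hidden_dim), nn.ReLU()]
            prev = hidden_dim
        layers.append(nn.Linear(prev, num_senones))  # senone (tied-state) outputs
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # Returns logits; softmax and HMM decoding happen downstream.
        return self.net(x)


# Scaling model size: widening the hidden layers grows the parameter count
# roughly quadratically in the hidden dimension.
small = DNNAcousticModel(hidden_dim=2048)
large = DNNAcousticModel(hidden_dim=7000)
count = lambda m: sum(p.numel() for p in m.parameters())
print(f"small: {count(small) / 1e6:.1f}M params, large: {count(large) / 1e6:.1f}M params")
```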

