
Deep Speech 2: End-to-End Speech Recognition in English and Mandarin

Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, Zhenyao Zhu
Baidu Research, Silicon Valley AI Lab
arXiv:1512.02595 [cs.CL], 8 Dec 2015

@article{amodei2015deep,
   title={Deep Speech 2: End-to-End Speech Recognition in English and Mandarin},
   author={Amodei, Dario and Anubhai, Rishita and Battenberg, Eric and Case, Carl and Casper, Jared and Catanzaro, Bryan and Chen, Jingdong and Chrzanowski, Mike and Coates, Adam and Diamos, Greg and Elsen, Erich and Engel, Jesse and Fan, Linxi and Fougner, Christopher and Han, Tony and Hannun, Awni and Jun, Billy and LeGresley, Patrick and Lin, Libby and Narang, Sharan and Ng, Andrew and Ozair, Sherjil and Prenger, Ryan and Raiman, Jonathan and Satheesh, Sanjeev and Seetapun, David and Sengupta, Shubho and Wang, Yi and Wang, Zhiqian and Wang, Chong and Xiao, Bo and Yogatama, Dani and Zhan, Jun and Zhu, Zhenyao},
   journal={arXiv preprint arXiv:1512.02595},
   year={2015},
   month={dec},
   eprint={1512.02595},
   archivePrefix={arXiv},
   primaryClass={cs.CL}
}


We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech, two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end learning allows us to handle a diverse variety of speech including noisy environments, accents and different languages. Key to our approach is our application of HPC techniques, resulting in a 7x speedup over our previous system. Because of this efficiency, experiments that previously took weeks now run in days. This enables us to iterate more quickly to identify superior architectures and algorithms. As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets. Finally, using a technique called Batch Dispatch with GPUs in the data center, we show that our system can be inexpensively deployed in an online setting, delivering low latency when serving users at scale.
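The abstract describes an end-to-end model trained to map audio directly to character transcriptions. The sketch below is a minimal PyTorch illustration (not the authors' released code) of the general shape of such a pipeline: a 2-D convolutional front end over spectrograms, bidirectional GRU layers, and a CTC loss over a character alphabet. The layer sizes, the 29-symbol output alphabet, and the class name DeepSpeech2Sketch are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepSpeech2Sketch(nn.Module):
    """Spectrogram -> 2-D conv -> bidirectional GRUs -> per-frame character log-probs."""

    def __init__(self, n_mels=161, n_chars=29, rnn_dim=512, rnn_layers=3):
        super().__init__()
        # Convolutional front end over (batch, 1, freq, time) spectrograms;
        # the strides reduce the resolution the recurrent layers must cover.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=(41, 11), stride=(2, 2), padding=(20, 5)),
            nn.BatchNorm2d(32),
            nn.Hardtanh(0, 20, inplace=True),   # clipped ReLU
            nn.Conv2d(32, 32, kernel_size=(21, 11), stride=(2, 1), padding=(10, 5)),
            nn.BatchNorm2d(32),
            nn.Hardtanh(0, 20, inplace=True),
        )
        # Infer the flattened feature size with a dummy pass rather than
        # hand-deriving the post-convolution frequency dimension.
        with torch.no_grad():
            dummy = self.conv(torch.zeros(1, 1, n_mels, 8))
        feat_dim = dummy.size(1) * dummy.size(2)
        self.rnn = nn.GRU(feat_dim, rnn_dim, num_layers=rnn_layers,
                          bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * rnn_dim, n_chars)  # characters + CTC blank

    def forward(self, spectrograms):
        # spectrograms: (batch, 1, n_mels, time)
        x = self.conv(spectrograms)                # (batch, 32, freq', time')
        x = x.permute(0, 3, 1, 2).flatten(2)       # (batch, time', 32 * freq')
        x, _ = self.rnn(x)                         # (batch, time', 2 * rnn_dim)
        return F.log_softmax(self.fc(x), dim=-1)   # per-frame log-probs for CTC

# One training step with CTC loss on fake data.
model = DeepSpeech2Sketch()
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
specs = torch.randn(4, 1, 161, 200)                # 4 fake utterances
log_probs = model(specs).transpose(0, 1)           # CTC wants (time', batch, n_chars)
targets = torch.randint(1, 29, (4, 30))            # fake character label indices
input_lengths = torch.full((4,), log_probs.size(0), dtype=torch.long)
target_lengths = torch.full((4,), 30, dtype=torch.long)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()

Note that nn.CTCLoss expects log-probabilities with shape (time, batch, classes), hence the transpose before the loss call; beam-search decoding against a language model, as used in the paper, is omitted from this sketch.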

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
