Espresso: A Fast End-to-end Neural Speech Recognition Toolkit
Center of Language and Speech Processing, Johns Hopkins University, Baltimore, MD, USA
arXiv:1909.08723
@misc{wang2019espresso,
      title={Espresso: A Fast End-to-end Neural Speech Recognition Toolkit},
      author={Yiming Wang and Tongfei Chen and Hainan Xu and Shuoyang Ding and Hang Lv and Yiwen Shao and Nanyun Peng and Lei Xie and Shinji Watanabe and Sanjeev Khudanpur},
      year={2019},
      eprint={1909.08723},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
We present Espresso, an open-source, modular, extensible end-to-end neural automatic speech recognition (ASR) toolkit based on the deep learning library PyTorch and the popular neural machine translation toolkit fairseq. Espresso supports distributed training across GPUs and computing nodes, and features various decoding approaches commonly employed in ASR, including look-ahead word-based language model fusion, for which a fast, parallelized decoder is implemented. Espresso achieves state-of-the-art ASR performance on the WSJ, LibriSpeech, and Switchboard data sets among other end-to-end systems without data augmentation, and is 4–11x faster at decoding than similar systems (e.g., ESPnet).
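The abstract highlights language model fusion during beam-search decoding. As a rough illustration of the underlying idea (not Espresso's actual API), the sketch below shows shallow fusion: the end-to-end model's log-probabilities are interpolated with an external LM's, and the whole beam is scored in one batched tensor operation, which is the basic ingredient of a parallelized fusion decoder. The function name `fused_scores`, the LM weight, and the toy dimensions are assumptions made purely for illustration.

```python
import torch

def fused_scores(asr_log_probs, lm_log_probs, lm_weight=0.5):
    """Shallow-fusion scoring for one beam-search step (illustrative only).

    asr_log_probs: (beam, vocab) log-probabilities from the end-to-end model
    lm_log_probs:  (beam, vocab) log-probabilities from the external LM
    Scoring the entire beam with a single batched tensor op is what makes
    this kind of decoder easy to parallelize on a GPU.
    """
    return asr_log_probs + lm_weight * lm_log_probs

# Toy example: a beam of 2 hypotheses over a 4-symbol vocabulary.
asr = torch.log_softmax(torch.randn(2, 4), dim=-1)
lm = torch.log_softmax(torch.randn(2, 4), dim=-1)
scores = fused_scores(asr, lm, lm_weight=0.5)
# Expand each hypothesis with its top-2 fused continuations.
next_scores, next_tokens = scores.topk(k=2, dim=-1)
print(next_scores, next_tokens)
```

Espresso's look-ahead word-based fusion additionally spreads a word-level LM's probability over subword prefixes during search; the sketch above only shows the simpler interpolation step.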
September 22, 2019 by hgpu