
Unfolding and Shrinking Neural Machine Translation Ensembles

Felix Stahlberg, Bill Byrne
Department of Engineering, University of Cambridge, UK
arXiv:1704.03279 [cs.CL] (11 Apr 2017)

@article{stahlberg2017unfolding,
   title={Unfolding and Shrinking Neural Machine Translation Ensembles},
   author={Stahlberg, Felix and Byrne, Bill},
   year={2017},
   month={apr},
   eprint={1704.03279},
   archivePrefix={arXiv},
   primaryClass={cs.CL}
}

Ensembling is a well-known technique in neural machine translation (NMT). Instead of a single neural network, multiple networks with the same topology are trained separately, and the decoder generates predictions by averaging over the individual models' predictions. Ensembling often drastically improves the quality of the generated translations. However, it is not suitable for production systems because it is cumbersome and slow. This work aims to reduce the runtime to be on par with a single system without compromising translation quality. First, we show that the ensemble can be unfolded into a single large neural network which imitates the output of the ensemble system. We show that unfolding can already improve the runtime in practice since more work can be done on the GPU. We then describe a set of techniques to shrink the unfolded network by reducing the dimensionality of its layers. On Japanese-English, we report that the resulting network has the size and decoding speed of a single NMT network but matches the translation quality of a 3-ensemble system.
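As a rough illustration of the unfolding idea described in the abstract (not the authors' exact construction), the NumPy sketch below stacks the weight matrices of two toy one-layer models into a single block-diagonal matrix, so both forward passes happen in one large matrix multiplication before averaging. It then uses a truncated SVD as a generic low-rank stand-in for the paper's shrinking step. All names and dimensions are hypothetical.

import numpy as np

# Toy "unfolding" of a 2-model ensemble (hypothetical dimensions).
rng = np.random.default_rng(0)
d_in, d_hid = 4, 6
W1 = rng.standard_normal((d_hid, d_in))  # weights of ensemble model 1
W2 = rng.standard_normal((d_hid, d_in))  # weights of ensemble model 2

# Unfolded layer: block-diagonal weights, so both models' forward
# passes run as a single large matmul (GPU-friendly).
W_unfolded = np.block([
    [W1, np.zeros((d_hid, d_in))],
    [np.zeros((d_hid, d_in)), W2],
])

x = rng.standard_normal(d_in)
x_dup = np.concatenate([x, x])                 # same input fed to both blocks
h = np.tanh(W_unfolded @ x_dup)                # one matmul instead of two
ensemble_out = 0.5 * (h[:d_hid] + h[d_hid:])   # averaging as in ensembling

# Sanity check: identical to running the two models separately.
assert np.allclose(ensemble_out,
                   0.5 * (np.tanh(W1 @ x) + np.tanh(W2 @ x)))

# "Shrinking" stand-in: a truncated SVD factorizes the unfolded weight
# into two thin matrices, introducing a low-dimensional bottleneck.
# The paper reduces layer dimensionality with its own criteria; SVD is
# only a generic low-rank substitute for illustration.
U, s, Vt = np.linalg.svd(W_unfolded, full_matrices=False)
k = d_hid                                # shrink back to single-model size
A = U[:, :k] * s[:k]                     # shape (2*d_hid, k)
B = Vt[:k]                               # shape (k, 2*d_in)
h_approx = np.tanh(A @ (B @ x_dup))      # two small matmuls via a rank-k path

In a full network, the thin projection matrices would be absorbed into the neighboring layers' weights, so the shrunk model keeps the layer sizes (and hence decoding speed) of a single system.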
