Deep Architectures for Neural Machine Translation
School of Informatics, University of Edinburgh
arXiv:1707.07631 [cs.CL], 24 Jul 2017
@article{barone2017deep,
  title={Deep Architectures for Neural Machine Translation},
  author={Miceli Barone, Antonio Valerio and Helcl, Jindřich and Sennrich, Rico and Haddow, Barry and Birch, Alexandra},
  year={2017},
  month={jul},
  eprint={1707.07631},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
It has been shown that increasing model depth improves the quality of neural machine translation. However, different architectural variants for increasing model depth have been proposed, and so far there has been no thorough comparative study of them.
In this work, we describe and evaluate several existing approaches to introduce depth in neural machine translation. Additionally, we explore novel architectural variants, including deep transition RNNs, and we vary how attention is used in the deep decoder. We introduce a novel “BiDeep” RNN architecture that combines deep transition RNNs and stacked RNNs.
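To make the composition of stacked and deep transition recurrence concrete, here is a minimal PyTorch sketch of a single BiDeep recurrent step. It is an illustrative reconstruction, not the authors' Nematus implementation: the names TransitionGRUCell, BiDeepRNN, stack_depth, and transition_depth are hypothetical, the assumption that transition cells are input-less GRU-style updates follows the usual deep-transition formulation, and the sketch omits residual connections, layer normalization, and the decoder's attention. One factorization consistent with the combined depth of 8 reported below is stack_depth=4 with transition_depth=2.

import torch
import torch.nn as nn

class TransitionGRUCell(nn.Module):
    # GRU-style cell without external input: it only transforms the hidden
    # state, which is how extra transition steps deepen each level.
    def __init__(self, hidden_size):
        super().__init__()
        self.gates = nn.Linear(hidden_size, 2 * hidden_size)  # update + reset gates
        self.candidate = nn.Linear(hidden_size, hidden_size)

    def forward(self, h):
        z, r = torch.sigmoid(self.gates(h)).chunk(2, dim=-1)
        h_tilde = torch.tanh(self.candidate(r * h))
        return (1 - z) * h + z * h_tilde

class BiDeepRNN(nn.Module):
    # One BiDeep time step (hypothetical sketch): a stack of GRU levels
    # (stacked depth), where each level applies additional input-less
    # transition cells (transition depth).
    def __init__(self, input_size, hidden_size, stack_depth=4, transition_depth=2):
        super().__init__()
        self.levels = nn.ModuleList()
        self.transitions = nn.ModuleList()
        for k in range(stack_depth):
            in_size = input_size if k == 0 else hidden_size
            self.levels.append(nn.GRUCell(in_size, hidden_size))
            self.transitions.append(nn.ModuleList(
                [TransitionGRUCell(hidden_size) for _ in range(transition_depth - 1)]))

    def forward(self, x, states):
        # x: (batch, input_size); states: one (batch, hidden_size) tensor per level.
        new_states = []
        for gru, cells, h in zip(self.levels, self.transitions, states):
            h = gru(x, h)              # first transition of the level consumes the input
            for cell in cells:         # remaining transitions deepen the level
                h = cell(h)
            new_states.append(h)
            x = h                      # stacked connection to the level above
        return x, new_states

# Usage: one recurrent step over a batch of 32 input vectors.
layer = BiDeepRNN(input_size=512, hidden_size=1024)
states = [torch.zeros(32, 1024) for _ in range(4)]
output, states = layer(torch.randn(32, 512), states)

Unrolling this step over a source or target sequence gives a deep encoder or decoder; the combined depth is stack_depth multiplied by transition_depth.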
Our evaluation is carried out on the English to German WMT news translation task, using a single-GPU machine for both training and inference. We find that several of our proposed architectures improve upon existing approaches in terms of both speed and translation quality. The best results are obtained with a BiDeep RNN of combined depth 8, which yields an average improvement of 1.5 BLEU over a strong shallow baseline.
We release our code for ease of adoption.
August 1, 2017 by hgpu