Efficient softmax approximation for GPUs

Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, Hervé Jégou
Facebook AI Research
arXiv:1609.04309 [cs.CL] (14 Sep 2016)


@article{grave2016efficient,

   title={Efficient softmax approximation for GPUs},

   author={Grave, Edouard and Joulin, Armand and Cisse, Moustapha and Grangier, David and Jegou, Herve},

   journal={arXiv preprint arXiv:1609.04309},

   year={2016}

}



We propose an approximate strategy to efficiently train neural network based language models over very large vocabularies. Our approach, called adaptive softmax, circumvents the linear dependency on the vocabulary size by exploiting the unbalanced word distribution to form clusters that explicitly minimize the expectation of computational complexity. Our approach further reduces the computational cost by exploiting the specificities of modern architectures and matrix-matrix vector operations, making it particularly suited for graphical processing units. Our experiments carried out on standard benchmarks, such as EuroParl and One Billion Word, show that our approach brings a large gain in efficiency over standard approximations while achieving an accuracy close to that of the full softmax.
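To make the idea of "clusters that minimize expected computational complexity" concrete, here is a minimal illustrative sketch (not the paper's implementation, which uses several tail clusters and a GPU-specific cost model): under a Zipfian word distribution, we search for the head/tail split of the vocabulary that minimizes the expected number of output units evaluated per token. All function names here are hypothetical.

```python
def zipf_probs(vocab_size):
    """Unigram probabilities p_i proportional to 1/i (Zipf's law, exponent 1)."""
    weights = [1.0 / (i + 1) for i in range(vocab_size)]
    total = sum(weights)
    return [w / total for w in weights]

def expected_cost(probs, head_size):
    """Expected number of output units evaluated per token.

    The head always scores `head_size` frequent words plus one extra unit
    representing the tail cluster; the tail (size K - head_size) is only
    evaluated with probability equal to the tail's total probability mass.
    """
    tail_mass = sum(probs[head_size:])
    return (head_size + 1) + tail_mass * (len(probs) - head_size)

def best_split(probs):
    """Head size minimizing the expected cost (brute-force search)."""
    return min(range(1, len(probs)), key=lambda k: expected_cost(probs, k))

if __name__ == "__main__":
    probs = zipf_probs(10000)
    k = best_split(probs)
    # The optimal split costs far less than the full softmax's 10000 units.
    print(k, expected_cost(probs, k))
```

Because frequent words carry most of the probability mass, the expensive tail is rarely touched, which is the source of the speed-up; PyTorch's `nn.AdaptiveLogSoftmaxWithLoss` implements the full multi-cluster version of this method.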

* * *

HGPU group © 2010-2021 hgpu.org

All rights belong to the respective authors
