
Efficient softmax approximation for GPUs

Edouard Grave, Armand Joulin, Moustapha Cisse, David Grangier, Herve Jegou
Facebook AI Research
arXiv:1609.04309 [cs.CL] (14 Sep 2016)

@article{grave2016efficient,
   title={Efficient softmax approximation for GPUs},
   author={Grave, Edouard and Joulin, Armand and Cisse, Moustapha and Grangier, David and Jegou, Herve},
   year={2016},
   month={sep},
   eprint={1609.04309},
   archivePrefix={arXiv},
   primaryClass={cs.CL}
}


We propose an approximate strategy to efficiently train neural network based language models over very large vocabularies. Our approach, called adaptive softmax, circumvents the linear dependency on the vocabulary size by exploiting the unbalanced word distribution to form clusters that explicitly minimize the expectation of computational complexity. Our approach further reduces the computational cost by exploiting the specificities of modern architectures and matrix-matrix vector operations, making it particularly suited for graphical processing units. Our experiments carried out on standard benchmarks, such as EuroParl and One Billion Word, show that our approach brings a large gain in efficiency over standard approximations while achieving an accuracy close to that of the full softmax.
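For readers who want to try the method, PyTorch ships an implementation of this paper's adaptive softmax as torch.nn.AdaptiveLogSoftmaxWithLoss. The sketch below shows typical usage as the output layer of a language model; the vocabulary size, hidden dimension, and frequency cutoffs are illustrative placeholders, not values taken from the paper.

    # Minimal sketch of an adaptive softmax output layer.
    # nn.AdaptiveLogSoftmaxWithLoss implements the clustering scheme
    # described in this paper; sizes below are illustrative only.
    import torch
    import torch.nn as nn

    vocab_size = 100_000    # |V|: a large vocabulary
    hidden_size = 512       # dimension of the LM's hidden state

    # Cutoffs split the vocabulary by frequency rank: the 2,000 most
    # frequent words form the head cluster, ranks 2,000-9,999 the first
    # tail cluster, and the rest the last one. Tail clusters use smaller
    # projections (each divided by div_value), which is where the
    # savings over a full softmax come from.
    adaptive_softmax = nn.AdaptiveLogSoftmaxWithLoss(
        in_features=hidden_size,
        n_classes=vocab_size,
        cutoffs=[2000, 10000],
        div_value=4.0,
    )

    hidden = torch.randn(32, hidden_size)          # batch of hidden states
    targets = torch.randint(0, vocab_size, (32,))  # target word indices

    out = adaptive_softmax(hidden, targets)
    print(out.loss)   # mean negative log-likelihood over the batch

    # Full log-probabilities (e.g. for perplexity evaluation) remain
    # available, at the cost of touching every cluster:
    log_probs = adaptive_softmax.log_prob(hidden)  # shape (32, vocab_size)

During training, only the head and the one tail cluster containing each target are evaluated, so the expected cost tracks the skewed word distribution rather than the full vocabulary size, as the abstract describes.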
