
Regularization and nonlinearities for neural language models: when are they needed?

Marius Pachitariu, Maneesh Sahani
Gatsby Computational Neuroscience Unit, University College London, UK
arXiv:1301.5650 [stat.ML] (23 Jan 2013)

@article{2013arXiv1301.5650P,
   author = {{Pachitariu}, M. and {Sahani}, M.},
   title = "{Regularization and nonlinearities for neural language models: when are they needed?}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1301.5650},
   primaryClass = "stat.ML",
   keywords = {Statistics – Machine Learning, Computer Science – Learning},
   year = 2013,
   month = jan,
   adsurl = {http://adsabs.harvard.edu/abs/2013arXiv1301.5650P},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

Download (PDF)   View   Source

We show that a recently proposed regularization method called random dropouts works well for language models based on neural networks when little training data is available. Random dropout regularization involves adding a certain kind of noise to the likelihood function being optimized and can be interpreted as a variational approximation to a new class of generative models. We also introduce a simple linear neural network model in which the nonlinearity of the RNN is removed and the recurrent matrix is restricted to be diagonal. The hidden units of this model compute filtered projections of the input, hence the name FPNN. We show that FPNNs are state-of-the-art language models on the Penn Corpus when properly regularized with random dropouts and column normalization. The nonlinear type of RNN we consider uses rectifier units. Despite their highly nonlinear nature, these RNNs are not better language models than FPNNs, even on a large dataset where regularization no longer matters. However, when modelling language at the character level, FPNNs do not work well, while an RNN trained with stochastic gradient descent achieves results similar to those of the multiplicative RNN trained with Hessian-Free optimization. GPU training time for the SGD-trained RNN is, however, 50 times less than that of the HF-trained M-RNN.
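
As a rough illustration of the ideas summarized above, the sketch below implements (in NumPy) a diagonally recurrent, linear hidden layer whose units are filtered projections of the input, together with standard multiplicative binary dropout on the hidden units and a simple column-normalization step. This is a minimal sketch under stated assumptions, not the authors' code: the dimensions, weight scales, 0.5 dropout rate, and the max-norm form of column normalization are illustrative choices, and the paper's exact placement of the dropout noise may differ.

import numpy as np

rng = np.random.default_rng(0)

V, H = 1000, 64                              # vocabulary size, hidden units (illustrative)
W_in  = rng.normal(0.0, 0.01, size=(H, V))   # input projection weights
d     = rng.uniform(0.0, 1.0, size=H)        # diagonal recurrent weights: one decay per unit
W_out = rng.normal(0.0, 0.01, size=(V, H))   # output weights

def fpnn_step(h_prev, x_onehot, train=True, drop_p=0.5):
    """One step of a linear, diagonally recurrent hidden layer:
    h_t = d * h_{t-1} + W_in @ x_t  (elementwise '*', no nonlinearity),
    so each unit is a first-order filtered projection of the input.
    During training, hidden units are multiplied by Bernoulli noise (dropout)."""
    h = d * h_prev + W_in @ x_onehot
    if train:
        mask = rng.random(H) > drop_p        # drop each unit with probability drop_p
        h = h * mask / (1.0 - drop_p)        # rescale to keep the expected activation
    return h

def column_normalize(W, max_norm=1.0):
    """One common form of column normalization: rescale any column whose
    L2 norm exceeds max_norm back onto the constraint ball."""
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    return W * np.minimum(1.0, max_norm / np.maximum(norms, 1e-12))

# Toy usage: run a few steps over random word indices and score the next word.
h = np.zeros(H)
for w in rng.integers(0, V, size=5):
    x = np.zeros(V)
    x[w] = 1.0
    h = fpnn_step(h, x)
W_out = column_normalize(W_out)              # would normally be applied after each update
logits = W_out @ h
probs = np.exp(logits - logits.max())
probs /= probs.sum()                         # softmax distribution over the next word

For comparison, the nonlinear RNN variant mentioned in the abstract would use a full recurrent matrix and rectifier units, i.e. an update of the form h_t = max(0, W_rec @ h_prev + W_in @ x_t).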