Revisit Long Short-Term Memory: An Optimization Perspective
State Key Laboratory of Intelligent Technology and Systems (LITS), Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
Workshop on Deep Learning and Representation Learning, 2014
@inproceedings{lyu2014revisit,
title={Revisit Long Short-Term Memory: An Optimization Perspective},
author={Lyu, Qi and Zhu, Jun},
booktitle={Workshop on Deep Learning and Representation Learning},
year={2014}
}
Long Short-Term Memory (LSTM) is a deep recurrent neural network architecture with high computational complexity. Contrary to the standard practice of training LSTM online with stochastic gradient descent (SGD) methods, we propose a matrix-based batch learning method for LSTM with full Backpropagation Through Time (BPTT). We further solve the state drifting issue and improve the overall performance of LSTM using revised activation functions for the gates. With these changes, advanced optimization algorithms are applied to LSTM with long time dependencies for the first time and show great advantages over SGD methods. We further demonstrate that large-scale LSTM training can be greatly accelerated with parallel computation architectures like CUDA and MapReduce.
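To illustrate the matrix-based batch formulation the abstract refers to, the sketch below computes one LSTM time step for an entire mini-batch with two matrix products, rather than looping over samples as an online SGD implementation would. This is a minimal NumPy sketch under common assumptions (standard sigmoid/tanh gate activations, weights for all four gates packed into single matrices `W` and `U`); the paper's revised gate activation functions and full BPTT are not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_batch_step(x, h_prev, c_prev, W, U, b):
    """One standard LSTM step for a whole mini-batch via matrix products.

    x:      (batch, n_in)    current inputs
    h_prev: (batch, n_hid)   previous hidden states
    c_prev: (batch, n_hid)   previous cell states
    W:      (n_in, 4*n_hid)  input weights, packed as [input, forget, cell, output]
    U:      (n_hid, 4*n_hid) recurrent weights, packed the same way
    b:      (4*n_hid,)       biases
    """
    n_hid = h_prev.shape[1]
    # Pre-activations for all four gates of the whole batch
    # in just two matrix multiplications.
    z = x @ W + h_prev @ U + b
    i = sigmoid(z[:, :n_hid])            # input gate
    f = sigmoid(z[:, n_hid:2 * n_hid])   # forget gate
    g = np.tanh(z[:, 2 * n_hid:3 * n_hid])  # candidate cell update
    o = sigmoid(z[:, 3 * n_hid:])        # output gate
    c = f * c_prev + i * g               # new cell state
    h = o * np.tanh(c)                   # new hidden state
    return h, c
```

Because each step is a dense matrix multiplication over the batch, the same formulation maps directly onto GPU BLAS kernels, which is what makes the CUDA acceleration mentioned above effective.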
January 23, 2015 by hgpu