Revisit Long Short-Term Memory: An Optimization Perspective

Qi Lyu, Jun Zhu
State Key Laboratory of Intelligent Technology and Systems (LITS), Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
Workshop on Deep Learning and Representation Learning, 2014


@inproceedings{lyu2014revisit,
   title={Revisit Long Short-Term Memory: An Optimization Perspective},
   author={Lyu, Qi and Zhu, Jun},
   booktitle={Workshop on Deep Learning and Representation Learning},
   year={2014}
}



Long Short-Term Memory (LSTM) is a deep recurrent neural network architecture with high computational complexity. Contrary to the standard practice of training LSTM online with stochastic gradient descent (SGD), we propose a matrix-based batch learning method for LSTM with full Backpropagation Through Time (BPTT). We further resolve the state-drifting issue and improve overall LSTM performance by revising the activation functions of the gates. With these changes, advanced optimization algorithms are applied to LSTM with long time dependencies for the first time and show clear advantages over SGD methods. We also demonstrate that large-scale LSTM training can be greatly accelerated with parallel computing architectures such as CUDA and MapReduce.
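The abstract does not give the paper's equations or its revised gate activations. As a minimal sketch of what "matrix-based batch" LSTM computation means, the following NumPy snippet computes all four gate pre-activations of a standard LSTM cell for a whole batch in a single matrix product (all names, dimensions, and the use of the standard sigmoid gates are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, b):
    """One batched LSTM step (standard formulation, not the paper's variant).

    x: (batch, n_in) inputs; h_prev, c_prev: (batch, n_hid) previous states.
    W: (n_in + n_hid, 4 * n_hid) — one fused weight matrix so that all four
    gate pre-activations come from a single matrix product, the kind of
    operation that maps well onto GPU (CUDA) hardware.
    """
    z = np.concatenate([x, h_prev], axis=1) @ W + b   # (batch, 4 * n_hid)
    i, f, o, g = np.split(z, 4, axis=1)               # input/forget/output/candidate
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)      # gates in (0, 1)
    g = np.tanh(g)                                    # candidate cell input
    c = f * c_prev + i * g                            # new cell state
    h = o * np.tanh(c)                                # new hidden state
    return h, c

# Toy usage: a batch of 8 sequences, 16 input features, 32 hidden units,
# unrolled over 5 time steps (as full BPTT would unroll the whole sequence).
rng = np.random.default_rng(0)
batch, n_in, n_hid = 8, 16, 32
W = rng.standard_normal((n_in + n_hid, 4 * n_hid)) * 0.1
b = np.zeros(4 * n_hid)
h = c = np.zeros((batch, n_hid))
for t in range(5):
    x_t = rng.standard_normal((batch, n_in))
    h, c = lstm_step(x_t, h, c, W, b)
print(h.shape)  # (8, 32)
```

Because each step is a dense matrix product over the whole batch, the forward (and backward) passes reduce to BLAS-style operations, which is what makes the batch formulation amenable to the parallel back-ends the abstract mentions.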

