Single stream parallelization of generalized LSTM-like RNNs on a GPU

Kyuyeon Hwang, Wonyong Sung
Department of Electrical and Computer Engineering, Seoul National University, Seoul 151-744, South Korea
arXiv:1503.02852 [cs.NE], (10 Mar 2015)

@article{hwang2015single,

   title={Single stream parallelization of generalized LSTM-like RNNs on a GPU},

   author={Hwang, Kyuyeon and Sung, Wonyong},

   year={2015},

   month={mar},

   archivePrefix={arXiv},

   eprint={1503.02852},

   primaryClass={cs.NE}

}

Recurrent neural networks (RNNs) have shown outstanding performance in processing sequence data. However, they suffer from long training times, which demand parallel implementations of the training procedure. Parallelizing the training algorithms for RNNs is very challenging because internal recurrent paths form dependencies between two different time frames. In this paper, we first propose a generalized graph-based RNN structure that covers the most popular long short-term memory (LSTM) network. Then, we present a parallelization approach that automatically explores the parallelism of arbitrary RNNs by analyzing the graph structure. The experimental results show that the proposed approach achieves significant speed-up even with a single training stream, and further accelerates training when combined with multiple parallel training streams.
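The core scheduling idea behind such graph-based parallelization can be sketched as follows. In an unrolled RNN, the node at (layer l, time t) depends on (l-1, t) through feed-forward connections and on (l, t-1) through recurrent connections, so all nodes on the same anti-diagonal l + t are mutually independent and can run concurrently. The snippet below is an illustrative reconstruction of this wavefront grouping, not the authors' actual implementation:

```python
from collections import defaultdict

def parallel_levels(num_layers, num_steps):
    """Group nodes of an unrolled RNN dependency graph into wavefronts.

    Node (l, t) depends on (l-1, t) [feed-forward] and (l, t-1)
    [recurrent], so its earliest execution step is l + t. Nodes in
    the same wavefront have no mutual dependencies and may be
    launched concurrently (e.g., as one batched GPU kernel).
    """
    levels = defaultdict(list)
    for l in range(num_layers):
        for t in range(num_steps):
            levels[l + t].append((l, t))
    return [levels[k] for k in sorted(levels)]

# Example: a 2-layer RNN unrolled over 3 time steps yields 4 wavefronts.
for step, nodes in enumerate(parallel_levels(2, 3)):
    print(step, nodes)
```

With a single training stream, the speed-up comes from executing all nodes of a wavefront in parallel on the GPU rather than stepping through the graph node by node.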

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
