
Accelerating recurrent neural network training using sequence bucketing and multi-GPU data parallelization

Viacheslav Khomenko, Oleg Shyshkov, Olga Radyvonenko, Kostiantyn Bokhan
Samsung R&D Institute Ukraine (SRK), 57, L'va Tolstogo Str., Kyiv, 01032, Ukraine
arXiv:1708.05604 [cs.LG] (18 Aug 2017)

@article{khomenko2017accelerating,
   title={Accelerating recurrent neural network training using sequence bucketing and multi-GPU data parallelization},
   author={Khomenko, Viacheslav and Shyshkov, Oleg and Radyvonenko, Olga and Bokhan, Kostiantyn},
   year={2017},
   month={aug},
   eprint={1708.05604},
   archivePrefix={arXiv},
   primaryClass={cs.LG},
   doi={10.1109/DSMP.2016.7583516}
}


An efficient algorithm for recurrent neural network training is presented. The approach increases the training speed for tasks where the length of the input sequence may vary significantly. The proposed approach is based on optimal batch bucketing by input sequence length and on data parallelization across multiple graphics processing units. The baseline training performance without sequence bucketing is compared with the proposed solution for different numbers of buckets. An example is given for the online handwriting recognition task using an LSTM recurrent neural network. The evaluation is performed in terms of wall-clock time, number of epochs, and validation loss value.
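To illustrate the bucketing idea, here is a minimal Python sketch (our hypothetical example, not the authors' implementation; the function and variable names are assumptions, and the equal-size bucket boundaries below stand in for the optimal bucketing the paper derives). Sequences are sorted by length and grouped into buckets, so each mini-batch is padded only to the longest sequence in its own bucket rather than in the whole corpus:

# Sketch of sequence bucketing for variable-length mini-batches.
# Assumption: equal-size quantile buckets; the paper optimizes this choice.
import random

def bucket_batches(sequences, num_buckets, batch_size):
    """Group variable-length sequences into low-padding mini-batches."""
    # Sort by length so neighbours in a bucket have similar lengths.
    ordered = sorted(sequences, key=len)
    bucket_len = (len(ordered) + num_buckets - 1) // num_buckets
    buckets = [ordered[i:i + bucket_len]
               for i in range(0, len(ordered), bucket_len)]

    batches = []
    for bucket in buckets:
        random.shuffle(bucket)  # keep stochasticity within a bucket
        for i in range(0, len(bucket), batch_size):
            batches.append(bucket[i:i + batch_size])
    random.shuffle(batches)  # visit buckets in random order each epoch
    return batches

# Usage: each batch is padded to its own longest sequence, so short
# sequences no longer pay for the longest sequence in the dataset.
data = [[0] * random.randint(5, 200) for _ in range(1000)]
for batch in bucket_batches(data, num_buckets=8, batch_size=32):
    max_len = max(len(s) for s in batch)  # per-batch padding target
    padded = [s + [0] * (max_len - len(s)) for s in batch]

In a multi-GPU data-parallel setup, each such batch would additionally be split across devices with gradients averaged after each step; the paper evaluates this combination for different numbers of buckets.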
