Introducing CURRENNT – the Munich open-source CUDA RecurREnt Neural Network Toolkit
Machine Learning & Signal Processing Group, MMK, Technische Universität München, 80290 Munich, Germany
Journal of Machine Learning Research, 15:5, 2014
@article{weninger2014introducing,
  title={Introducing CURRENNT -- the Munich open-source CUDA RecurREnt Neural Network Toolkit},
  author={Weninger, Felix and Bergmann, Johannes and Schuller, Bj{\"o}rn},
  journal={Journal of Machine Learning Research},
  volume={15},
  year={2014}
}
In this article, we introduce CURRENNT, an open-source parallel implementation of deep recurrent neural networks (RNNs) supporting graphics processing units (GPUs) through NVIDIA’s Compute Unified Device Architecture (CUDA). CURRENNT supports uni- and bidirectional RNNs with Long Short-Term Memory (LSTM) cells, which overcome the vanishing gradient problem. To our knowledge, CURRENNT is the first publicly available parallel implementation of deep LSTM-RNNs. Benchmarks are given on a noisy speech recognition task from the 2013 2nd CHiME Speech Separation and Recognition Challenge, where LSTM-RNNs have been shown to deliver the best performance. As a result, double-digit speedups in bidirectional LSTM training are achieved with respect to a reference single-threaded CPU implementation. CURRENNT is available under the GNU General Public License from http://sourceforge.net/p/currennt.
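To give a rough idea of the kind of parallelism an LSTM-RNN exposes on a GPU, the sketch below implements one forward time step of a layer of LSTM cells as a CUDA kernel, with one thread per (sequence, cell) pair. This is an illustrative toy, not CURRENNT's actual code: the kernel name lstm_forward_step, the [input | forget | cell-input | output] memory layout, and the assumption that the gate pre-activations (W x_t + R h_{t-1} + b) have already been computed by a batched matrix multiply are all made up for this example.

// Illustrative only: one LSTM forward step, parallel over (sequence, cell) pairs.
#include <cuda_runtime.h>
#include <cmath>
#include <cstdio>

__device__ float sigmoidf(float x) { return 1.0f / (1.0f + expf(-x)); }

// pre: gate pre-activations, laid out as [i | f | g | o] blocks of size n each
// c:   cell states (updated in place), h: output activations, n = sequences x cells
__global__ void lstm_forward_step(const float* pre, float* c, float* h, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= n) return;

    float i = sigmoidf(pre[idx]);          // input gate
    float f = sigmoidf(pre[n + idx]);      // forget gate
    float g = tanhf(pre[2 * n + idx]);     // cell input
    float o = sigmoidf(pre[3 * n + idx]);  // output gate

    float c_new = f * c[idx] + i * g;      // cell state update (constant error carousel)
    c[idx] = c_new;
    h[idx] = o * tanhf(c_new);             // cell output
}

int main()
{
    const int n = 4 * 128;                 // e.g. 4 parallel sequences x 128 cells
    float *pre, *c, *h;
    cudaMallocManaged(&pre, 4 * n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    cudaMallocManaged(&h, n * sizeof(float));
    for (int k = 0; k < 4 * n; ++k) pre[k] = 0.1f;   // dummy pre-activations
    for (int k = 0; k < n; ++k) { c[k] = 0.0f; h[k] = 0.0f; }

    lstm_forward_step<<<(n + 255) / 256, 256>>>(pre, c, h, n);
    cudaDeviceSynchronize();
    printf("h[0] = %f\n", h[0]);

    cudaFree(pre); cudaFree(c); cudaFree(h);
    return 0;
}

The recurrence still forces time steps to be processed sequentially; the speedups reported in the paper come from parallelizing within a time step, across cells, gates, and multiple training sequences processed in parallel.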
October 22, 2014 by hgpu