
TrimZero: A Torch Recurrent Module for Efficient Natural Language Processing

Jin-Hwa Kim, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang
Interdisciplinary Program in Cognitive Science, Seoul National University
Proceedings of KIIS Spring Conference, Vol. 26, No. 1, 2016
@inproceedings{kim2016trimzero,
   title={TrimZero: A Torch Recurrent Module for Efficient Natural Language Processing},
   author={Kim, Jin-Hwa and Kim, Jeonghee and Ha, Jung-Woo and Zhang, Byoung-Tak},
   booktitle={Proceedings of KIIS Spring Conference},
   volume={26},
   number={1},
   year={2016}
}

Deep learning frameworks supported by the CUDA parallel computing platform have boosted advances in machine learning research. The advantage of parallel processing largely comes from the efficiency of matrix-matrix multiplication on CUDA-enabled graphics processing units (GPUs). For recurrent neural networks (RNNs), this forces the use of zero-filled matrices to represent the variable sentence lengths within a learning batch; however, these zeros still waste computational resources. We propose an efficient algorithm that trims off the zeros in a batch while producing the same result for RNNs. Benchmark results validate our method with approximately 25% faster learning, and a natural language task empirically confirms these results.
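The core idea of trimming zeros can be sketched as follows. This is a minimal NumPy illustration, not the authors' Torch implementation: at each time step, rows of the batch whose input is all zeros (padding) are skipped, the recurrent cell runs only on the remaining rows, and padded rows carry their hidden state forward unchanged, assuming the masked-RNN convention that a padded step leaves the hidden state untouched. The function names `rnn_step` and `trimzero_step` are hypothetical.

```python
import numpy as np

def rnn_step(x, h, W_x, W_h):
    """A vanilla RNN cell: h' = tanh(x @ W_x + h @ W_h)."""
    return np.tanh(x @ W_x + h @ W_h)

def trimzero_step(x_t, h, W_x, W_h):
    """Apply the cell only to non-padded (non-zero) rows of the batch.

    Padded rows keep their previous hidden state, so the final states
    match a masked RNN while the matrix multiplications run on a
    smaller, trimmed batch.
    """
    mask = np.any(x_t != 0, axis=1)   # rows carrying real tokens
    h_next = h.copy()
    if mask.any():
        h_next[mask] = rnn_step(x_t[mask], h[mask], W_x, W_h)
    return h_next
```

Because the cost of the cell is dominated by matrix multiplication, shrinking the batch at heavily padded time steps is where the reported speedup would come from; sorting sequences by length makes the trimmed rows contiguous and the savings larger.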
