Efficiency of Parallelization of Neural Network Algorithm on Graphic Cards

Dariusz Konieczny, Karol Radziszewski
Institute of Informatics, Technical University of Wroclaw, Wybrzeze Wyspianskiego 27, 50-370 Wroclaw
Information Systems Architecture and Technology, 2012

@article{konieczny2012efficiency,
   title={Efficiency of Parallelization of Neural Network Algorithm on Graphic Cards},
   author={Konieczny, Dariusz and Radziszewski, Karol},
   journal={Information Systems Architecture and Technology},
   year={2012}
}





In this paper we test the efficiency of parallelization using graphics cards. Such systems occur in many application areas; here we chose the domain of artificial neural networks. Currently available graphics cards offer strong potential for speeding up computations, and card vendors provide supporting software and documentation, such as CUDA (Compute Unified Device Architecture). Instead of using ready-made algebra libraries, however, this work uses the runtime layer of CUDA, which gives more flexibility and almost full control over the hardware. We also present more technical details of the implemented algorithms and methods than other papers on this topic. Because the systems running the sequential and parallel versions of the application differ in architecture, it was necessary to redefine the original definition of efficiency in order to compare these heterogeneous systems. We tested our solutions on selected CUDA-capable graphics cards. The input data for the neural network, which served as benchmark data, were global features extracted from histopathological HER-2 images.
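For context, the classical definitions of speedup and parallel efficiency that the abstract refers to can be sketched as follows. This is an illustrative sketch only: the function names and numbers are not from the paper, and the paper's own redefinition for heterogeneous CPU/GPU systems is not reproduced here.

```python
# Classical (homogeneous-system) speedup and efficiency, the textbook
# definitions that the paper adapts for CPU-vs-GPU comparisons.
# Names and example numbers are illustrative, not taken from the paper.

def speedup(t_seq: float, t_par: float) -> float:
    """Speedup S = T_seq / T_par."""
    return t_seq / t_par

def efficiency(t_seq: float, t_par: float, p: int) -> float:
    """Classical efficiency E = S / p, with p processing units.

    For a GPU, p is ill-defined relative to a CPU core (a CUDA core is
    not comparable to a CPU core), which is why the paper redefines
    efficiency for heterogeneous systems.
    """
    return speedup(t_seq, t_par) / p

# Example: a 10 s sequential run reduced to 0.5 s on a device
# treated as having 40 processing units.
s = speedup(10.0, 0.5)         # 20.0
e = efficiency(10.0, 0.5, 40)  # 0.5
```

The difficulty the paper addresses is visible in `p`: counting GPU "processors" as if they were CPU cores makes the classical formula misleading, hence the need for a redefined metric.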
