
Are Very Deep Neural Networks Feasible on Mobile Devices?

S. Rallapalli, H. Qiu, A. J. Bency, S. Karthikeyan, R. Govindan, B. S. Manjunath, R. Urgaonkar
University of Southern California, Technical report 16-965, 2016

@techreport{rallapalli2016very,
   title={Are Very Deep Neural Networks Feasible on Mobile Devices?},
   author={Rallapalli, S. and Qiu, H. and Bency, A. J. and Karthikeyan, S. and Govindan, R. and Manjunath, B. S. and Urgaonkar, R.},
   institution={University of Southern California},
   number={16-965},
   year={2016}
}

In recent years, the computing power of mobile devices has increased tremendously, a trend that is expected to continue. With high-quality onboard cameras, these devices can collect large volumes of visual information. Motivated by the observation that processing this video on the mobile device can enable many new applications, we explore the feasibility of running very deep Convolutional Neural Networks (CNNs) for video processing tasks on an emerging class of mobile platforms with embedded GPUs. We find that the memory available in these mobile GPUs is significantly less than what very deep CNNs require. We then quantify the performance of several deep-CNN-specific memory management techniques, some of which leverage the observation that these CNNs have a small number of layers that account for most of the memory. We find that a particularly novel approach, which offloads these bottleneck layers to the mobile device's CPU and pipelines frame processing, is promising and does not impact the accuracy of these tasks. We conclude by arguing that such techniques will likely remain necessary for the foreseeable future despite technological improvements.
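The offloading idea in the abstract can be illustrated with a minimal sketch: most layers run on the (embedded) GPU, the few memory-heavy bottleneck layers run on the CPU, and the two stages are pipelined so the GPU can start on the next frame while the CPU finishes the current one. The stage functions, layer split, and names below are illustrative assumptions, not the paper's actual implementation.

```python
import queue
import threading

def gpu_stage(frame):
    # Stand-in for the layers executed on the embedded GPU
    # (e.g., the convolutional layers, which need comparatively little memory).
    return f"features({frame})"

def cpu_stage(features):
    # Stand-in for the memory-heavy bottleneck layers offloaded to the CPU
    # (hypothetically, the large fully connected layers).
    return f"label({features})"

def run_pipeline(frames):
    """Process frames through a two-stage GPU->CPU pipeline."""
    handoff = queue.Queue(maxsize=1)  # bounded queue gives back-pressure
    results = []

    def cpu_worker():
        while True:
            features = handoff.get()
            if features is None:       # sentinel: no more frames
                break
            results.append(cpu_stage(features))

    worker = threading.Thread(target=cpu_worker)
    worker.start()
    for frame in frames:
        # While the CPU worker consumes frame i, the GPU stage
        # can already compute features for frame i+1.
        handoff.put(gpu_stage(frame))
    handoff.put(None)
    worker.join()
    return results
```

Because the handoff queue is FIFO and there is a single CPU worker, results come back in frame order; the pipelining hides the CPU stage's latency behind the GPU stage's work without changing the network's outputs, which is consistent with the abstract's claim that accuracy is unaffected.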
