Modeling the Resource Requirements of Convolutional Neural Networks on Mobile Devices
Peking University
arXiv:1709.09503 [cs.CV], 27 Sep 2017
@article{lu2017modeling,
title={Modeling the Resource Requirements of Convolutional Neural Networks on Mobile Devices},
author={Lu, Zongqing and Rallapalli, Swati and Chan, Kevin and Porta, Thomas La},
year={2017},
month={sep},
archivePrefix={arXiv},
eprint={1709.09503},
primaryClass={cs.CV},
doi={10.1145/3123266.3123389}
}
Convolutional Neural Networks (CNNs) have revolutionized research in computer vision due to their ability to capture complex patterns, resulting in high inference accuracies. However, the increasingly complex nature of these networks means that they are particularly suited for server computers with powerful GPUs. We envision that deep learning applications will eventually be widely deployed on mobile devices, e.g., smartphones, self-driving cars, and drones. Therefore, in this paper, we aim to understand the resource requirements (time, memory) of CNNs on mobile devices. First, by deploying several popular CNNs on mobile CPUs and GPUs, we measure and analyze the performance and resource usage of every layer of the CNNs. Our findings point out potential ways of optimizing performance on mobile devices. Second, we model the resource requirements of the different CNN computations. Finally, based on the measurement, profiling, and modeling, we build and evaluate our modeling tool, Augur, which takes a CNN configuration (descriptor) as input and estimates the compute time and resource usage of the CNN, to give insights about whether and how efficiently a CNN can be run on a given mobile platform. In doing so, Augur tackles several challenges: (i) how to overcome profiling and measurement overhead; (ii) how to capture the variance across mobile platforms with different processors, memory, and cache sizes; and (iii) how to account for the variance in the number, type, and size of layers across different CNN configurations.
September 28, 2017 by hgpu