Neural Network Inference on Mobile SoCs
Department of Computer Science, School of Computing, National University of Singapore, SG
arXiv:1908.11450 [cs.LG], (24 Aug 2019)
@misc{wang2019neural,
  title={Neural Network Inference on Mobile SoCs},
  author={Siqi Wang and Anuj Pathania and Tulika Mitra},
  year={2019},
  eprint={1908.11450},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
The ever-increasing demand from mobile Machine Learning (ML) applications calls for ever more powerful on-chip computing resources. Mobile devices are equipped with Heterogeneous Multi-Processor Systems on Chips (HMPSoCs) to process ML workloads such as Convolutional Neural Network (CNN) inference. HMPSoCs house several different types of ML-capable components on-die, such as CPUs, GPUs, and accelerators. Each of these components can perform inference independently, but with very different power-performance characteristics. In this article, we provide a quantitative evaluation of the inference capabilities of the different components on HMPSoCs. We also present insights into their respective power-performance behaviour. Finally, we explore the performance limit of HMPSoCs by synergistically engaging all the components concurrently.
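One common way to engage all components concurrently, as the abstract describes, is to partition each inference batch across components in proportion to their throughput. The sketch below illustrates the idea only; the component names and throughput figures are hypothetical, and the per-component inference call is a stand-in, since real values depend on the specific SoC, CNN, and runtime used in the paper.

```python
import threading
from queue import Queue

# Hypothetical relative throughputs (images/sec) for each on-chip
# component; real numbers depend on the SoC and the CNN being run.
THROUGHPUT = {"big-CPU": 12.0, "LITTLE-CPU": 4.0, "GPU": 30.0}

def partition(n_images, throughput):
    """Split a batch proportionally to each component's throughput."""
    total = sum(throughput.values())
    shares = {c: int(n_images * t / total) for c, t in throughput.items()}
    # Hand any rounding remainder to the fastest component.
    fastest = max(throughput, key=throughput.get)
    shares[fastest] += n_images - sum(shares.values())
    return shares

def run_concurrent(n_images, throughput):
    """Run every component on its partition of the batch in parallel."""
    results = Queue()

    def worker(component, count):
        # Stand-in for an actual inference call on that component.
        results.put((component, count))

    threads = [threading.Thread(target=worker, args=(c, k))
               for c, k in partition(n_images, throughput).items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return dict(results.queue)

shares = run_concurrent(92, THROUGHPUT)
```

With these example throughputs, the GPU receives the largest slice of the batch, and the makespan is bounded by the slowest component finishing its share, which is why proportional splitting matters.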
September 8, 2019 by hgpu