Neural Network Inference on Mobile SoCs

Siqi Wang, Anuj Pathania, Tulika Mitra
Department of Computer Science, School of Computing, National University of Singapore, SG
arXiv:1908.11450 [cs.LG], 24 Aug 2019

@misc{wang2019neural,
   title={Neural Network Inference on Mobile SoCs},
   author={Siqi Wang and Anuj Pathania and Tulika Mitra},
   year={2019},
   eprint={1908.11450},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
}

The ever-increasing demand from mobile Machine Learning (ML) applications calls for ever more powerful on-chip computing resources. Mobile devices are equipped with Heterogeneous Multi-Processor Systems on Chips (HMPSoCs) to process ML workloads such as Convolutional Neural Network (CNN) inference. HMPSoCs house several different types of ML-capable components on-die, such as CPUs, GPUs, and accelerators. These components can each perform inference independently, but with very different power-performance characteristics. In this article, we provide a quantitative evaluation of the inference capabilities of the different components on HMPSoCs. We also present insights behind their respective power-performance behaviour. Finally, we explore the performance limit of the HMPSoCs by synergistically engaging all the components concurrently.
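As a rough illustration of what "synergistically engaging all the components concurrently" can mean in practice, the following minimal Python sketch (not taken from the paper) splits a batch of inference requests between two hypothetical backends, one standing in for the CPU cluster and one for the GPU, and runs them in parallel. The functions run_on_cpu and run_on_gpu, the latency figures, and the split ratio are assumptions for illustration only; a real system would use the vendor's inference runtime for each component.

    import concurrent.futures
    import time

    def run_on_cpu(images):
        # Hypothetical stand-in for CNN inference on the CPU cluster.
        time.sleep(0.05 * len(images))  # assumed per-image latency
        return [f"cpu:{i}" for i in images]

    def run_on_gpu(images):
        # Hypothetical stand-in for CNN inference on the integrated GPU.
        time.sleep(0.02 * len(images))  # assumed per-image latency
        return [f"gpu:{i}" for i in images]

    def concurrent_inference(images, cpu_share=0.3):
        # Partition the batch so both components are busy at the same time;
        # the share would normally be tuned to each component's throughput.
        split = int(len(images) * cpu_share)
        with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
            cpu_future = pool.submit(run_on_cpu, images[:split])
            gpu_future = pool.submit(run_on_gpu, images[split:])
            return cpu_future.result() + gpu_future.result()

    if __name__ == "__main__":
        print(concurrent_inference(list(range(10))))

The design choice mirrored here is simply that the slower component is given a proportionally smaller share of the work, so that both finish at roughly the same time and neither sits idle.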