
MNN: A Universal and Efficient Inference Engine

Xiaotang Jiang, Huan Wang, Yiliu Chen, Ziqi Wu, Lichuan Wang, Bin Zou, Yafeng Yang, Zongyang Cui, Yu Cai, Tianhang Yu, Chengfei Lv, Zhihua Wu
Alibaba Group, Hangzhou, China
arXiv:2002.12418 [cs.CV] (27 Feb 2020)

@misc{jiang2020mnn,
   title={MNN: A Universal and Efficient Inference Engine},
   author={Xiaotang Jiang and Huan Wang and Yiliu Chen and Ziqi Wu and Lichuan Wang and Bin Zou and Yafeng Yang and Zongyang Cui and Yu Cai and Tianhang Yu and Chengfei Lv and Zhihua Wu},
   year={2020},
   eprint={2002.12418},
   archivePrefix={arXiv},
   primaryClass={cs.CV}
}

Deploying deep learning models on mobile devices has drawn increasing attention recently. However, designing an efficient on-device inference engine faces the great challenges of model compatibility, device diversity, and resource limitation. To deal with these challenges, we propose Mobile Neural Network (MNN), a universal and efficient inference engine tailored to mobile applications. In this paper, the contributions of MNN include: (1) presenting a mechanism called pre-inference that conducts runtime optimization; (2) delivering thorough kernel optimization on operators to achieve optimal computation performance; (3) introducing a backend abstraction module which enables hybrid scheduling and keeps the engine lightweight. Extensive benchmark experiments demonstrate that MNN performs favorably against other popular lightweight deep learning frameworks. MNN is available to the public.
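The backend abstraction described in contribution (3) is visible in MNN's open-source C++ API, where the compute backend is selected through a ScheduleConfig at session-creation time rather than being baked into the model; session creation is also where the paper's pre-inference step (shape resolution and scheduling) takes place. Below is a minimal sketch based on the public MNN headers; the model path is a placeholder, and exact header and enum names may vary slightly across MNN versions.

    #include <memory>
    #include <MNN/Interpreter.hpp>
    #include <MNN/MNNForwardType.h>

    int main() {
        // Load a converted .mnn model ("model.mnn" is a placeholder path).
        std::shared_ptr<MNN::Interpreter> net(
            MNN::Interpreter::createFromFile("model.mnn"));
        if (!net) return 1;

        // Backend abstraction: choose the compute backend here;
        // MNN falls back to CPU if the requested backend is unavailable.
        MNN::ScheduleConfig config;
        config.type      = MNN_FORWARD_OPENCL; // or MNN_FORWARD_CPU, _METAL, _VULKAN
        config.numThread = 4;

        // createSession triggers pre-inference: tensor shapes are resolved
        // and the computation is scheduled before any runSession call.
        MNN::Session* session = net->createSession(config);

        // Fill the default input tensor, run, and read the default output.
        MNN::Tensor* input = net->getSessionInput(session, nullptr);
        // ... copy input data into `input` via MNN::Tensor copy helpers ...
        net->runSession(session);
        MNN::Tensor* output = net->getSessionOutput(session, nullptr);
        (void)input; (void)output;
        return 0;
    }

Switching between CPU and GPU execution only changes the ScheduleConfig, which is the practical payoff of the backend abstraction module: the model and the calling code stay the same across devices.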