Accelerating Deep Learning Inference with Cross-Layer Data Reuse on GPUs
State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, China
arXiv:2007.06000 [cs.DC], 12 Jul 2020
@misc{wang2020accelerating,
  title={Accelerating Deep Learning Inference with Cross-Layer Data Reuse on GPUs},
  author={Xueying Wang and Guangli Li and Xiao Dong and Jiansong Li and Lei Liu and Xiaobing Feng},
  year={2020},
  eprint={2007.06000},
  archivePrefix={arXiv},
  primaryClass={cs.DC}
}
Accelerating deep learning inference is important for real-time applications. In this paper, we propose a novel method to fuse the layers of convolutional neural networks (CNNs) on Graphics Processing Units (GPUs), which applies data reuse analysis and access optimization at different levels of the memory hierarchy. To balance computation and memory access, we explore fusion opportunities in the CNN computation graph and propose three fusion modes for convolutional neural networks: straight, merge, and split. We then design an approach for generating efficient fused code that exploits multiple levels of the memory hierarchy for cross-layer data reuse. The effectiveness of our method is evaluated on network layers from state-of-the-art CNNs on two GPU platforms, NVIDIA TITAN Xp and Tesla P4. The experiments show an average speedup of 2.02x on representative CNN structures and 1.57x on end-to-end inference of SqueezeNet.
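The core idea of "straight" fusion (one layer feeding directly into the next) can be illustrated with a toy example. The sketch below is not the paper's implementation; it uses a pair of 1-D convolutions and plain Python to show how a fused loop stages only the intermediate elements each output tile needs (plus a halo) in a small local buffer, which on a GPU would live in shared memory or registers instead of being written to and re-read from global memory. The function names and the tile size are illustrative choices, not from the paper.

```python
def conv1d(x, w):
    """Valid 1-D convolution (correlation form) of x with filter w."""
    k = len(w)
    return [sum(x[i + j] * w[j] for j in range(k)) for i in range(len(x) - k + 1)]

def unfused(x, w1, w2):
    # Layer-by-layer execution: the intermediate feature map is
    # materialized in full (global memory), then read back by layer 2.
    return conv1d(conv1d(x, w1), w2)

def fused(x, w1, w2, tile=4):
    # Cross-layer fusion: for each output tile, compute just the slice of
    # the intermediate (with a halo of len(w2) - 1 elements) into a local
    # buffer and consume it immediately -- the cross-layer data reuse.
    k1, k2 = len(w1), len(w2)
    n_out = len(x) - k1 - k2 + 2  # length of the final output
    out = []
    for t0 in range(0, n_out, tile):
        t1 = min(t0 + tile, n_out)
        # Intermediate indices [t0, t1 + k2 - 1) need this input span:
        buf = conv1d(x[t0 : t1 + k2 + k1 - 2], w1)  # stand-in for shared memory
        out.extend(conv1d(buf, w2))
    return out

x = list(range(16))
w1, w2 = [1, 0, -1], [1, 2, 1]
assert fused(x, w1, w2) == unfused(x, w1, w2)
```

The trade-off mirrors the one discussed in the abstract: fusion removes the global-memory round trip for the intermediate layer, at the cost of redundant computation in the halo regions where adjacent tiles overlap, so tile size must balance computation against memory access.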
July 19, 2020 by hgpu