Software-Defined FPGA Accelerator Design for Mobile Deep Learning Applications
Division of Electronics and Computer Engineering, Department of Electrical and Computer Engineering, Faculty of Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
arXiv:1902.03192 [cs.CV] (8 Feb 2019)
@misc{mousouliotis2019softwaredefined,
  title={Software-Defined FPGA Accelerator Design for Mobile Deep Learning Applications},
  author={Panagiotis G. Mousouliotis and Loukas P. Petrou},
  year={2019},
  eprint={1902.03192},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
Recently, the field of deep learning has received great attention from the scientific community, and it is used to provide improved solutions to many computer vision problems. Convolutional neural networks (CNNs) have been successfully applied to problems such as object recognition, object detection, semantic segmentation, and scene understanding. The rapid development of deep learning has gone hand in hand with the adoption of GPUs for accelerating its processes, such as network training and inference. Even though FPGA design predates the use of GPUs for accelerating computations, and despite the fact that high-level synthesis (HLS) tools are becoming more attractive, the adoption of FPGAs for deep learning research and application development remains limited because of the hardware-design expertise it requires. This work presents a workflow for accelerating mobile deep learning applications on small, low-cost, low-power FPGA devices using HLS tools. This workflow eases the design of an improved version of the SqueezeJet accelerator, used to speed up mobile-friendly, low-parameter ImageNet-class CNNs such as SqueezeNet v1.1 and ZynqNet. Additionally, the workflow includes the development of an HLS-driven analytical model, which is used for performance estimation of the accelerator. This model can also be used to direct the design process and lead to future design improvements and optimizations.
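To make the idea of an analytical performance model concrete, the sketch below estimates the latency of a CNN accelerator from per-layer MAC counts and parallelism factors. This is an illustrative model only: the function names, PE counts, and clock frequency are assumptions for the sake of the example, not values or formulas taken from the paper.

```python
# Illustrative sketch of an analytical latency model for a conv-layer
# accelerator. All parameters (PE counts, clock frequency, layer shapes)
# are hypothetical, not taken from the paper.

def conv_layer_cycles(h_out, w_out, c_out, c_in, k, pe_out=8, pe_in=4):
    """Estimate cycles for one convolution layer.

    Assumes the accelerator computes pe_out output channels and pe_in
    input channels in parallel; remaining MACs execute sequentially.
    """
    macs = h_out * w_out * c_out * c_in * k * k
    parallelism = pe_out * pe_in
    # Ceiling division: a partially filled PE group still costs a cycle.
    return -(-macs // parallelism)

def network_latency_ms(layers, clock_mhz=100.0):
    """Sum per-layer cycle estimates and convert to milliseconds."""
    total_cycles = sum(conv_layer_cycles(*layer) for layer in layers)
    return total_cycles / (clock_mhz * 1e3)

# Example: two toy layers, each (h_out, w_out, c_out, c_in, k).
layers = [(56, 56, 64, 3, 3), (28, 28, 128, 64, 3)]
print(f"Estimated latency: {network_latency_ms(layers):.2f} ms")
```

A model of this kind lets a designer compare parallelism factors (`pe_out`, `pe_in`) against measured HLS results before committing to a long synthesis run, which is the role the abstract attributes to the HLS-driven model.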
February 17, 2019 by hgpu