Accelerating CNN on FPGA: An Implementation of MobileNet on FPGA
KTH Royal Institute of Technology
KTH Royal Institute of Technology, 2019
@misc{shen2019accelerating,
  title={Accelerating CNN on FPGA: An Implementation of MobileNet on FPGA},
  author={Shen, Yulan},
  year={2019}
}
The Convolutional Neural Network (CNN) is a deep learning algorithm that has had a revolutionary impact on computer vision; one of its applications is image classification. However, the algorithm involves a huge number of operations and parameters, which limits its use in time- and resource-constrained embedded applications. MobileNet, a neural network that uses depthwise separable convolutional layers instead of standard convolutional layers, greatly reduces the computational cost compared to traditional CNN models. By implementing MobileNet on an FPGA, image classification can be significantly accelerated. In this thesis, we have designed an accelerator block for MobileNet and implemented a simplified MobileNet on a Xilinx UltraScale+ ZU104 FPGA board with 64 accelerators. We use the implemented MobileNet to solve a gesture classification problem. The implemented design runs at a 100 MHz clock frequency. It shows a 28.4x speedup over a CPU (Intel(R) Pentium(R) CPU G4560 @ 3.50 GHz) and a 6.5x speedup over a GPU (NVIDIA GeForce 940MX @ 1.004 GHz). It is also a power-efficient design, consuming 4.07 W. The accuracy reaches 43% in gesture classification.
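The computational saving from depthwise separable convolutions can be made concrete: a standard convolution with a K x K kernel, M input channels, N output channels, and an F x F output map costs K·K·M·N·F·F multiplications, whereas a depthwise separable layer costs K·K·M·F·F + M·N·F·F. The sketch below is only an illustration of that arithmetic; the layer dimensions are hypothetical and are not taken from the thesis or its MobileNet configuration.

```python
# Rough multiply-count comparison between a standard convolution and a
# depthwise separable convolution (the layer type MobileNet is built on).
# The example layer dimensions below are illustrative, not from the thesis.

def standard_conv_mults(k, m, n, f):
    """K x K kernel, M input channels, N output channels, F x F output map."""
    return k * k * m * n * f * f

def separable_conv_mults(k, m, n, f):
    """Depthwise K x K conv per input channel, then a 1x1 pointwise conv."""
    return k * k * m * f * f + m * n * f * f

if __name__ == "__main__":
    k, m, n, f = 3, 128, 128, 56          # hypothetical layer dimensions
    std = standard_conv_mults(k, m, n, f)
    sep = separable_conv_mults(k, m, n, f)
    print(f"standard:  {std:,} multiplies")
    print(f"separable: {sep:,} multiplies")
    print(f"reduction: {std / sep:.1f}x")  # roughly 8-9x for a 3x3 kernel
```

For 3x3 kernels this works out to roughly an 8-9x reduction in multiplications per layer, which is the margin that makes MobileNet attractive for embedded FPGA deployment.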