A Survey of FPGA Based Deep Learning Accelerators: Challenges and Opportunities
School of Software Engineering of USTC, Suzhou, China
arXiv:1901.04988 [cs.DC] (25 Dec 2018)
@article{wang2019survey,
  title={A Survey of FPGA Based Deep Learning Accelerators: Challenges and Opportunities},
  author={Wang, Teng and Wang, Chao and Zhou, Xuehai and Chen, Huaping},
  journal={arXiv preprint arXiv:1901.04988},
  year={2019},
  month={jan},
  eprint={1901.04988},
  archivePrefix={arXiv},
  primaryClass={cs.DC}
}
With the rapid development of deep learning, neural network and deep learning algorithms have been widely applied in various fields, e.g., image, video, and voice processing. At the same time, neural network models keep growing larger, which is reflected in the number of model parameters and the amount of computation they require. Although a wealth of existing efforts exploit GPU platforms to improve computing performance, dedicated hardware solutions are essential and are emerging to provide advantages over pure software solutions. In this paper, we systematically investigate neural network accelerators based on FPGAs. Specifically, we review accelerators designed for specific problems, specific algorithms, algorithm features, and general templates. We also compare the design and implementation of FPGA-based accelerators across different devices and network models, and compare them with CPU and GPU versions. Finally, we discuss the advantages and disadvantages of accelerators on FPGA platforms and explore opportunities for future research.
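The FPGA accelerators surveyed here typically revolve around a convolution loop nest mapped to hardware through high-level synthesis, with loop pipelining and unrolling supplying the parallelism that CPUs and GPUs obtain from threads. As a rough illustration only (not drawn from the paper; the tile sizes, the function name conv_layer, and the commented HLS-style pragmas are assumptions), a minimal C++ sketch of such a kernel might look like this:

// conv_kernel.cpp -- illustrative convolution loop nest in the style
// commonly targeted by HLS-based FPGA accelerators (assumed example).
#include <cstdio>

// Hypothetical sizes chosen purely for illustration.
constexpr int IN_CH   = 4;             // input feature-map channels
constexpr int OUT_CH  = 8;             // output feature-map channels
constexpr int IMG_DIM = 16;            // input height/width
constexpr int K       = 3;             // kernel size
constexpr int OUT_DIM = IMG_DIM - K + 1;

// One convolution layer as a plain loop nest: the pixel loop is a
// candidate for pipelining and the channel loops for unrolling.
void conv_layer(const float in[IN_CH][IMG_DIM][IMG_DIM],
                const float w[OUT_CH][IN_CH][K][K],
                float out[OUT_CH][OUT_DIM][OUT_DIM]) {
    for (int oc = 0; oc < OUT_CH; ++oc) {
        for (int r = 0; r < OUT_DIM; ++r) {
            for (int c = 0; c < OUT_DIM; ++c) {
                // #pragma HLS PIPELINE II=1  (assumed toolchain directive)
                float acc = 0.0f;
                for (int ic = 0; ic < IN_CH; ++ic) {
                    // #pragma HLS UNROLL     (assumed toolchain directive)
                    for (int kr = 0; kr < K; ++kr) {
                        for (int kc = 0; kc < K; ++kc) {
                            acc += in[ic][r + kr][c + kc] * w[oc][ic][kr][kc];
                        }
                    }
                }
                out[oc][r][c] = acc;
            }
        }
    }
}

int main() {
    static float in[IN_CH][IMG_DIM][IMG_DIM];
    static float w[OUT_CH][IN_CH][K][K];
    static float out[OUT_CH][OUT_DIM][OUT_DIM];

    // Fill inputs with simple deterministic values so the kernel has work to do.
    for (int ic = 0; ic < IN_CH; ++ic)
        for (int r = 0; r < IMG_DIM; ++r)
            for (int c = 0; c < IMG_DIM; ++c)
                in[ic][r][c] = 0.01f * (ic + r + c);
    for (int oc = 0; oc < OUT_CH; ++oc)
        for (int ic = 0; ic < IN_CH; ++ic)
            for (int kr = 0; kr < K; ++kr)
                for (int kc = 0; kc < K; ++kc)
                    w[oc][ic][kr][kc] = 0.1f;

    conv_layer(in, w, out);
    printf("out[0][0][0] = %f\n", out[0][0][0]);
    return 0;
}

The sketch compiles as ordinary C++; on an FPGA flow, the commented directives would control how the loop nest is unrolled and pipelined, which is where the accelerator designs compared in the survey differ from software baselines.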
January 20, 2019 by hgpu