Full-stack Optimization for Accelerating CNNs with FPGA Validation
Harvard University
arXiv:1905.00462 [cs.LG], 1 May 2019
@misc{mcdanel2019fullstack,
  title={Full-stack Optimization for Accelerating CNNs with FPGA Validation},
  author={Bradley McDanel and Sai Qian Zhang and H. T. Kung and Xin Dong},
  year={2019},
  eprint={1905.00462},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
We present a full-stack optimization framework for accelerating inference of CNNs (Convolutional Neural Networks) and validate the approach with field-programmable gate array (FPGA) implementations. By jointly optimizing CNN models, computing architectures, and hardware implementations, our full-stack approach achieves unprecedented performance in the trade-off space characterized by inference latency, energy efficiency, hardware utilization, and inference accuracy. As a validation vehicle, we have implemented a 170 MHz FPGA inference chip achieving 2.28 ms latency for the ImageNet benchmark. The achieved latency is among the lowest reported in the literature at comparable accuracy. Moreover, our chip achieves 9x higher energy efficiency than other implementations with comparable latency. A highlight of our full-stack approach, which contributes to the achieved high energy efficiency, is an efficient Selector-Accumulator (SAC) architecture for implementing the multiplier-accumulator (MAC) operation present in any digital CNN hardware. For instance, compared to an FPGA implementation of a traditional 8-bit MAC, SAC substantially reduces required hardware resources (4.85x fewer Look-up Tables) and power consumption (2.48x).
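The SAC idea can be illustrated with a small sketch. This is a hypothetical software analogue, not the paper's hardware design: it assumes weights are constrained to powers of two (stored as exponents), so each multiply reduces to selecting a shifted copy of the input, which in hardware needs only a multiplexer and an adder rather than a full multiplier. The function names and the `None`-as-zero-weight convention are illustrative assumptions.

```python
# Hedged sketch: conventional MAC vs. a SAC-style select-and-accumulate,
# assuming power-of-two weight quantization (the paper's exact datapath
# and quantization scheme may differ).

def mac(inputs, weights):
    """Conventional multiplier-accumulator: one full multiply per term."""
    acc = 0
    for x, w in zip(inputs, weights):
        acc += x * w
    return acc

def sac(inputs, exponents):
    """SAC-style accumulation: exponents[i] = e encodes weight 2**e,
    so a bit-shift selection replaces the multiplier; None encodes
    a zero weight (term skipped)."""
    acc = 0
    for x, e in zip(inputs, exponents):
        if e is not None:
            acc += x << e  # select shifted input instead of multiplying
    return acc

# Weights 4, 2, 0 correspond to exponents 2, 1, None.
xs = [3, 5, 7]
assert mac(xs, [4, 2, 0]) == sac(xs, [2, 1, None])  # both give 22
```

Because a shift-select is far cheaper than an 8-bit multiplier in LUTs and switching activity, this substitution is a plausible source of the resource and power savings the abstract reports.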
May 5, 2019 by hgpu