
EIE: Efficient Inference Engine on Compressed Deep Neural Network

Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, William J. Dally
Stanford University
arXiv:1602.01528 [cs.CV], (4 Feb 2016)

@article{han2016efficient,
   title={EIE: Efficient Inference Engine on Compressed Deep Neural Network},
   author={Han, Song and Liu, Xingyu and Mao, Huizi and Pu, Jing and Pedram, Ardavan and Horowitz, Mark A. and Dally, William J.},
   journal={arXiv preprint arXiv:1602.01528},
   year={2016},
   month={feb},
   eprint={1602.01528},
   archivePrefix={arXiv},
   primaryClass={cs.CV}
}


State-of-the-art deep neural networks (DNNs) have hundreds of millions of connections and are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources and power budgets. While custom hardware can help with the computation, fetching the weights from DRAM can be as much as two orders of magnitude more expensive than an ALU operation, and it dominates the required power. Previously proposed compression makes it possible to fit state-of-the-art DNNs (AlexNet with 60 million parameters, VGG-16 with 130 million parameters) fully in on-chip SRAM. This compression is achieved by pruning the redundant connections and having multiple connections share the same weight. We propose an energy-efficient inference engine (EIE) that performs inference on this compressed network model and accelerates the resulting sparse matrix-vector multiplication. Evaluated on nine DNN benchmarks, EIE is 189x and 13x faster than CPU and GPU implementations of the same DNN without compression. With a processing power of 102 GOPS at only 600 mW, EIE is also 24,000x and 3,000x more energy efficient than a CPU and a GPU, respectively. EIE incurs no loss of accuracy on AlexNet and VGG-16 on the ImageNet dataset, which represent state-of-the-art models and the largest computer vision benchmark.
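To make the core operation concrete, below is a minimal Python sketch of the kind of sparse matrix-vector product with weight sharing that EIE accelerates: each non-zero weight is stored as a small index into a shared codebook, zero activations are skipped, and the matrix is stored column by column. The function name, the CSC-style layout, and the tiny example data are illustrative assumptions for this sketch, not the paper's exact storage format (EIE uses a relative-indexed variant and dedicated hardware).

```python
import numpy as np

def spmv_compressed(n_rows, col_ptr, row_idx, weight_idx, codebook, x):
    """y = W @ x for a pruned, weight-shared matrix W stored column-wise:
    col_ptr[j]:col_ptr[j+1] spans the non-zeros of column j, row_idx gives
    their row positions, and weight_idx holds small codebook indices
    (shared weights) instead of full-precision floats.
    Illustrative sketch only, not the paper's exact encoding."""
    y = np.zeros(n_rows)
    for j, xj in enumerate(x):
        if xj == 0.0:                      # skip zero activations (dynamic sparsity)
            continue
        for k in range(col_ptr[j], col_ptr[j + 1]):
            y[row_idx[k]] += codebook[weight_idx[k]] * xj   # codebook lookup
    return y

# Tiny usage example: a 3x4 matrix with 4 non-zeros and a 2-entry codebook.
codebook   = np.array([0.5, -1.0])         # shared weight values
col_ptr    = np.array([0, 1, 2, 2, 4])     # column start offsets (4 columns)
row_idx    = np.array([0, 2, 1, 2])        # row of each non-zero
weight_idx = np.array([0, 1, 0, 1])        # small indices into the codebook
x = np.array([1.0, 0.0, 3.0, 2.0])         # activation 1 is zero and is skipped
print(spmv_compressed(3, col_ptr, row_idx, weight_idx, codebook, x))  # [0.5, 1.0, -2.0]
```

Because only codebook indices and non-zero positions are stored, the weight footprint shrinks enough to fit in on-chip SRAM, which is where the reported energy savings over DRAM fetches come from.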