SC-DCNN: Highly-Scalable Deep Convolutional Neural Network using Stochastic Computing
Department of Electrical Engineering and Computer Science, Syracuse University, Syracuse, NY, USA
arXiv:1611.05939 [cs.CV] (18 Nov 2016)
@article{ren2016scdcnn,
title={SC-DCNN: Highly-Scalable Deep Convolutional Neural Network using Stochastic Computing},
author={Ren, Ao and Li, Ji and Li, Zhe and Ding, Caiwen and Qian, Xuehai and Qiu, Qinru and Yuan, Bo and Wang, Yanzhi},
year={2016},
month={nov},
eprint={1611.05939},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
With the recent advance of the Internet of Things (IoT), it has become attractive to implement deep convolutional neural networks (DCNNs) on embedded/portable systems. At present, executing software-based DCNNs in practice requires high-performance server clusters, restricting their widespread deployment on mobile devices. To overcome this issue, considerable research effort has gone into developing highly parallel, application-specific DCNN hardware using GPGPUs, FPGAs, and ASICs. Stochastic Computing (SC), which represents a number within [-1, 1] as a bit-stream whose value is given by the proportion of ones, has high potential for implementing DCNNs with high scalability and an ultra-low hardware footprint. Because multiplications and additions can be computed with AND gates and multiplexers in SC, significant reductions in power/energy and hardware footprint can be achieved compared with conventional binary arithmetic implementations. These tremendous savings in power (energy) and hardware resources open an immense design space for enhancing the scalability and robustness of hardware DCNNs. This paper presents the first comprehensive design and optimization framework for SC-based DCNNs (SC-DCNNs). We first present optimal designs of the function blocks that perform the basic operations, i.e., inner product, pooling, and activation function. We then propose optimal designs for four types of combinations of basic function blocks, named feature extraction blocks, which are in charge of extracting features from input feature maps. In addition, weight storage methods are investigated to reduce the area and power/energy consumption of storing weights. Finally, the whole SC-DCNN implementation is optimized, with feature extraction blocks carefully selected, to minimize area and power/energy consumption while maintaining a high level of network accuracy.
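The arithmetic the abstract alludes to can be illustrated with a minimal software simulation of stochastic computing. The sketch below (not from the paper) uses the simpler unipolar encoding, where a value in [0, 1] is the probability of a 1 in the stream: a bit-wise AND then multiplies two independent streams, and a 2-to-1 multiplexer with a 0.5-biased select stream computes the scaled sum (x + y)/2. For the bipolar [-1, 1] encoding used in SC-DCNN, multiplication is typically an XNOR gate instead; all stream lengths and values here are illustrative.

```python
import random

def encode(x, length, rng):
    """Unipolar SC: represent x in [0, 1] as a bit-stream whose
    fraction of ones approximates x."""
    return [1 if rng.random() < x else 0 for _ in range(length)]

def decode(stream):
    """Recover the encoded value as the fraction of ones."""
    return sum(stream) / len(stream)

def sc_multiply(a, b):
    """Bit-wise AND multiplies two independent unipolar streams:
    P(a_i AND b_i = 1) = P(a_i = 1) * P(b_i = 1)."""
    return [x & y for x, y in zip(a, b)]

def sc_add(a, b, select):
    """A 2-to-1 multiplexer with a 0.5-biased select stream
    computes the scaled sum (x + y) / 2."""
    return [x if s else y for x, y, s in zip(a, b, select)]

rng = random.Random(0)
n = 100_000
a = encode(0.6, n, rng)
b = encode(0.5, n, rng)
sel = encode(0.5, n, rng)

print(decode(sc_multiply(a, b)))   # close to 0.6 * 0.5 = 0.30
print(decode(sc_add(a, b, sel)))   # close to (0.6 + 0.5) / 2 = 0.55
```

The accuracy of the result improves with stream length (the error shrinks roughly as the inverse square root of the number of bits), which is the scalability/accuracy trade-off the paper's design space explores.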
November 22, 2016 by hgpu