Accelerating Binarized Neural Networks: Comparison of FPGA, CPU, GPU, and ASIC

Eriko Nurvitadhi, David Sheffield, Jaewoong Sim, Asit Mishra, Ganesh Venkatesh, Debbie Marr
Accelerator Architecture Lab, Intel Corporation
International Conference on Field-Programmable Technology (FPT), 2016

@inproceedings{nurvitadhi2016accelerating,
   title={Accelerating Binarized Neural Networks: Comparison of FPGA, CPU, GPU, and ASIC},
   author={Nurvitadhi, Eriko and Sheffield, David and Sim, Jaewoong and Mishra, Asit and Venkatesh, Ganesh and Marr, Debbie},
   booktitle={International Conference on Field-Programmable Technology (FPT)},
   year={2016}
}

Deep neural networks (DNNs) are widely used in data analytics, since they deliver state-of-the-art accuracies. Binarized neural networks (BNNs) are a recently proposed optimized variant of DNNs. BNNs constrain network weights and/or neuron values to either +1 or -1, which is representable in a single bit. This leads to a dramatic improvement in algorithm efficiency, due to reduced memory and computational demands. This paper evaluates the opportunity to further improve the execution efficiency of BNNs through hardware acceleration. We first propose a BNN hardware accelerator design. We then implement the proposed accelerator on an Arria 10 FPGA as well as in 14-nm ASIC, and compare them against optimized software on a Xeon server CPU, an Nvidia Titan X server GPU, and an Nvidia TX1 mobile GPU. Our evaluation shows that the FPGA provides superior efficiency over the CPU and GPU. Even though the CPU and GPU offer high peak theoretical performance, they are not utilized as efficiently, since BNNs rely on binarized bit-level operations that are better suited to custom hardware. Finally, even though the ASIC is still more efficient, the FPGA can provide orders of magnitude of efficiency improvement over software, without locking into a fixed ASIC solution.
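The efficiency claim rests on the fact that, once weights and activations are constrained to +1/-1, a multiply-accumulate collapses to an XNOR followed by a population count. Below is a minimal C sketch of that primitive; the packing convention (bit 1 encodes +1, bit 0 encodes -1) and all names are illustrative assumptions, not the paper's accelerator implementation.

#include <stdint.h>
#include <stdio.h>

/* Dot product of two {-1,+1} vectors packed 64 elements per word.
 * A bit of 1 encodes +1, a bit of 0 encodes -1 (an assumed convention).
 * Matching bits contribute +1 to the dot product, differing bits -1,
 * so dot = N - 2 * popcount(a XOR b), with N the element count. */
static int bnn_dot(const uint64_t *a, const uint64_t *b, int nwords)
{
    int diff = 0;
    for (int i = 0; i < nwords; i++)
        diff += __builtin_popcountll(a[i] ^ b[i]); /* positions where signs differ */
    return 64 * nwords - 2 * diff;
}

int main(void)
{
    uint64_t w[2] = { 0xFFFFFFFFFFFFFFFFull, 0x0ull };              /* 64 x +1, then 64 x -1 */
    uint64_t x[2] = { 0xFFFFFFFFFFFFFFFFull, 0xFFFFFFFFFFFFFFFFull }; /* 128 x +1 */
    printf("dot = %d\n", bnn_dot(w, x, 2)); /* 64*(+1) + 64*(-1) = 0 */
    return 0;
}

One XNOR/XOR plus popcount over a 64-bit word replaces 64 floating-point multiply-accumulates, which is why this workload maps so well onto the bit-level logic of FPGAs and ASICs and underutilizes the wide floating-point units of CPUs and GPUs.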