FPGA Based Implementation of Deep Neural Networks Using On-chip Memory Only

Jinhwan Park, Wonyong Sung
Department of Electrical and Computer Engineering, Seoul National University, Seoul 151-744 Korea
arXiv:1602.01616 [cs.AR], (4 Feb 2016)

@article{park2016fpga,
   title={{FPGA} Based Implementation of Deep Neural Networks Using On-chip Memory Only},
   author={Park, Jinhwan and Sung, Wonyong},
   year={2016},
   month={feb},
   eprint={1602.01616},
   archivePrefix={arXiv},
   primaryClass={cs.AR}
}

Deep neural networks (DNNs) demand a very large amount of computation and weight storage, so efficient implementation on special-purpose hardware is highly desirable. In this work, we have developed an FPGA-based fixed-point DNN system that uses only on-chip memory, avoiding accesses to external DRAM. The execution time and energy consumption of the developed system are compared with those of a GPU-based implementation. Since the memory capacity of the FPGA is limited, only 3-bit weights are used, and training-based fixed-point weight optimization is employed. The implementation on a Xilinx XC7Z045 is tested on the MNIST handwritten digit recognition benchmark and on a phoneme recognition task using the TIMIT corpus. The obtained speed is about one quarter that of a GPU-based implementation and much better than that of a PC-based one. The power consumption is less than 5 W at full-speed operation, resulting in much higher efficiency than GPU-based systems.
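The abstract mentions 3-bit weights obtained through training-based fixed-point optimization. The full retraining procedure is described in the paper itself; the sketch below illustrates only the basic quantization step, assuming a symmetric uniform quantizer with an illustrative step size `delta` (both the function name and the step size are hypothetical, not taken from the paper).

```python
import numpy as np

def quantize_3bit(w, delta):
    """Uniformly quantize weights to 3-bit signed fixed point.

    Each weight is rounded to the nearest integer multiple of the
    step size `delta` and clipped to the two's-complement 3-bit
    range [-4, 3], giving at most 8 representable levels.
    """
    q = np.round(np.asarray(w) / delta)
    q = np.clip(q, -4, 3)
    return q * delta

# Example: quantize a random weight vector.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=1000)
wq = quantize_3bit(w, delta=0.05)
```

In a training-based scheme such as the one the paper employs, the network is retrained with quantized weights in the forward pass so the remaining full-precision parameters compensate for the quantization error.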

* * *

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors