A Power Efficient Neural Network Implementation on Heterogeneous FPGA and GPU Devices

Yuexuan Tu, Saad Sadiq, Yudong Tao, Mei-Ling Shyu, Shu-Ching Chen
Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
The 20th IEEE International Conference on Information Reuse and Integration for Data Science (IEEE IRI 2019), 2019


@inproceedings{tu2019power,
   title={A Power Efficient Neural Network Implementation on Heterogeneous FPGA and GPU Devices},
   author={Tu, Yuexuan and Sadiq, Saad and Tao, Yudong and Shyu, Mei-Ling and Chen, Shu-Ching},
   booktitle={The 20th IEEE International Conference on Information Reuse and Integration for Data Science (IEEE IRI 2019)},
   year={2019}
}



Deep neural networks (DNNs) have seen tremendous industrial success in various applications, including image recognition, machine translation, and audio processing. However, they require massive amounts of computation and take a long time to process. This quickly becomes a problem on mobile and handheld devices, where real-time multimedia applications such as face detection, disaster management, and CCTV require lightweight, fast, and effective computing solutions. The objective of this project is to utilize specialized devices such as Field Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs) in a heterogeneous computing environment to accelerate deep learning computations under power-efficiency constraints. We investigate an efficient DNN implementation that uses the FPGA for the fully-connected layers and the GPU for floating-point operations. This requires the deep neural network to be implemented as a model-parallel system, in which the DNN model is broken down and processed in a distributed fashion. The proposed heterogeneous framework is implemented using an Nvidia TX2 GPU and a Xilinx Artix-7 FPGA. Experimental results indicate that the proposed framework achieves faster computation and much lower power consumption.
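The model-parallel split described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden stand-in, not the paper's actual implementation: the "GPU" and "FPGA" stages below are plain NumPy functions that mimic how the floating-point-heavy feature-extraction layers and the fully-connected classifier would be partitioned across two devices, with only the intermediate activations crossing the device boundary.

```python
import numpy as np

# Hypothetical sketch of model parallelism across heterogeneous devices.
# In the paper's framework the first stage would run on the Nvidia TX2 GPU
# and the second on the Xilinx Artix-7 FPGA; here both are simulated in NumPy.

rng = np.random.default_rng(0)

def gpu_feature_extractor(x):
    """Floating-point-heavy stage (stand-in for the GPU partition)."""
    w = rng.standard_normal((x.shape[-1], 64))
    return np.maximum(x @ w, 0.0)               # dense layer + ReLU

def fpga_fully_connected(h, n_classes=10):
    """Fully-connected classifier stage (stand-in for the FPGA partition)."""
    w = rng.standard_normal((h.shape[-1], n_classes))
    logits = h @ w
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)    # softmax probabilities

# Model-parallel forward pass: the activations `h` are the only data
# transferred between the two device partitions.
x = rng.standard_normal((4, 32))                # batch of 4 input vectors
h = gpu_feature_extractor(x)
probs = fpga_fully_connected(h)
print(probs.shape)                              # (4, 10)
```

In a real deployment the two stages would run concurrently on their respective devices, so the inter-device transfer of `h` is the main communication cost of the partitioning.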

