
A Heterogeneous Inference Framework for a Deep Neural Network

Rafael Gadea-Gironés, José Luís Rocabado-Rocha, Jorge Fe, Jose M. Monzo
Institute for Molecular Imaging Technologies (I3M), Universitat Politècnica de València, 46022 Valencia, Spain;
Electronics, Volume 13, Issue 2, 2024

@article{gadea2024heterogeneous,
   title={A Heterogeneous Inference Framework for a Deep Neural Network},
   author={Gadea-Giron{\'e}s, Rafael and Rocabado-Rocha, Jos{\'e} Lu{\'i}s and Fe, Jorge and Monzo, Jose M},
   journal={Electronics},
   volume={13},
   number={2},
   pages={348},
   year={2024},
   publisher={MDPI}
}

Artificial intelligence (AI) is one of the most promising technologies based on machine learning algorithms. In this paper, we propose a workflow for the implementation of deep neural networks. This workflow attempts to combine the flexibility of high-level synthesis (HLS)-based networks with the architectural control features of hardware description language (HDL)-based flows. The architecture consists of a convolutional neural network, SqueezeNet v1.1, and a hard processor system (HPS) that coexists with the acceleration hardware to be designed. This methodology allows us to compare solutions based solely on software (PyTorch 1.13.1) and to propose heterogeneous inference solutions that take advantage of the best options within the software and hardware flows. The proposed workflow is implemented on a low-cost field-programmable gate array system-on-chip (FPGA SoC) platform, specifically the DE10-Nano development board. We provide systolic architectural solutions written in OpenCL that are highly flexible and easily tunable to take full advantage of the resources of programmable devices and achieve superior energy efficiency working with 32-bit floating point. From a verification point of view, the proposed method is effective, since the reference models for all tests, both for the individual layers and for the complete network, are readily available through packages well known for the development, training, and inference of deep networks.
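The layer-level and network-level verification described in the abstract can be illustrated with a short sketch. The fragment below is an assumption-laden example rather than the authors' published code: it uses PyTorch/torchvision's SqueezeNet v1.1 to capture a 32-bit floating-point reference output for the complete network and, via a forward hook, for a single layer; run_fpga_inference is a hypothetical placeholder for the HPS/FPGA acceleration path.

import torch
import torchvision.models as models

# Minimal verification sketch (assumption, not the authors' code): capture
# 32-bit floating-point reference outputs from SqueezeNet v1.1 in PyTorch,
# for both an individual layer and the complete network.

model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.IMAGENET1K_V1)
model.eval()

activations = {}
def save_activation(name):
    # Forward hook that stores a layer's output for layer-by-layer comparison.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# First convolution of SqueezeNet v1.1 (features[0]) taken as an example layer.
model.features[0].register_forward_hook(save_activation("conv1"))

x = torch.randn(1, 3, 224, 224, dtype=torch.float32)  # dummy 224x224 RGB input

with torch.no_grad():
    reference_logits = model(x)  # complete-network reference output in FP32

# fpga_logits = run_fpga_inference(x.numpy())  # hypothetical call into the HPS/FPGA path
# print(torch.allclose(reference_logits, torch.from_numpy(fpga_logits), atol=1e-3))

Registering the same kind of hook on other entries of model.features would expose per-layer reference activations for comparing individual OpenCL kernels against the software flow.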