De-specializing an HLS library for Deep Neural Networks: improvements upon hls4ml

Serena Curzel, Nicolò Ghielmetti, Michele Fiorito, Fabrizio Ferrandi
Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Italy
arXiv:2103.13060 [cs.AR], 24 Mar 2021

@misc{curzel2021despecializing,
   title={De-specializing an HLS library for Deep Neural Networks: improvements upon hls4ml},
   author={Serena Curzel and Nicolò Ghielmetti and Michele Fiorito and Fabrizio Ferrandi},
   year={2021},
   eprint={2103.13060},
   archivePrefix={arXiv},
   primaryClass={cs.AR}
}

Custom hardware accelerators for Deep Neural Networks are increasingly popular: the flexibility and performance offered by FPGAs are well-suited to the computational effort and low-latency constraints of many image recognition and natural language processing tasks. The gap between high-level Machine Learning frameworks (e.g., TensorFlow, PyTorch) and low-level hardware design in Verilog/VHDL creates a barrier to widespread adoption of FPGAs, which can be overcome with the help of High-Level Synthesis. hls4ml is a framework that translates Deep Neural Networks into annotated C++ code for High-Level Synthesis, offering a complete and user-friendly design process that has been enthusiastically adopted in physics research. We analyze the strengths and weaknesses of hls4ml, drafting a plan to enhance its core library of components in order to allow more advanced optimizations, target a wider selection of FPGAs, and support larger Neural Network models.

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors