Fast inference of deep neural networks in FPGAs for particle physics

Javier Duarte, Song Han, Philip Harris, Sergo Jindariani, Edward Kreinar, Benjamin Kreis, Jennifer Ngadiuba, Maurizio Pierini, Nhan Tran, Zhenbin Wu
Fermi National Accelerator Laboratory, Batavia, IL 60510, USA
arXiv:1804.06913 [physics.ins-det], (16 Apr 2018)

@article{duarte2018fast,
   title={Fast inference of deep neural networks in FPGAs for particle physics},
   author={Duarte, Javier and Han, Song and Harris, Philip and Jindariani, Sergo and Kreinar, Edward and Kreis, Benjamin and Ngadiuba, Jennifer and Pierini, Maurizio and Tran, Nhan and Wu, Zhenbin},
   year={2018},
   month={apr},
   eprint={1804.06913},
   archivePrefix={arXiv},
   primaryClass={physics.ins-det}
}

Recent results at the Large Hadron Collider (LHC) have pointed to enhanced physics capabilities through the improvement of real-time event-processing techniques. Machine learning methods are ubiquitous and have proven to be very powerful in LHC physics and in particle physics as a whole. However, exploration of the use of such techniques in low-latency, low-power FPGA hardware has only just begun. FPGA-based trigger and data acquisition (DAQ) systems have extremely low, sub-microsecond latency requirements that are unique to particle physics. We present a case study of neural network inference in FPGAs, focusing on a classifier for jet substructure that would enable, among many other physics scenarios, searches for new dark-sector particles and novel measurements of the Higgs boson. While we focus on a specific example, the lessons are far-reaching. We develop a package based on High-Level Synthesis (HLS), called hls4ml, to build machine learning models in FPGAs. The use of HLS increases accessibility across a broad user community and drastically decreases firmware development time. We map out FPGA resource usage and latency versus neural network hyperparameters to identify the problems in particle physics that would benefit from performing neural network inference with FPGAs. For our example jet substructure model, we fit well within the available resources of modern FPGAs, with a latency on the scale of 100 ns.
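The abstract describes compiling neural network layers to HLS C++ with fixed-point arithmetic so that each layer evaluates in a fixed, small number of clock cycles. The following is a minimal, illustrative sketch of that core building block, a fully connected layer with ReLU, written in plain C++ so it runs anywhere: the layer sizes, the fixed-point format (emulated here with scaled 32-bit integers rather than Vivado's `ap_fixed` types), and all identifiers are assumptions for illustration, not the actual hls4ml-generated code.

```cpp
#include <algorithm>
#include <array>
#include <cstdint>

// Illustrative sizes; an HLS-based generator would derive these from
// the trained model architecture.
constexpr int N_IN  = 4;
constexpr int N_OUT = 3;

// Emulate an ap_fixed<16,6>-style type with a scaled integer:
// 10 fractional bits, so value = raw / 1024.
constexpr int FRAC_BITS = 10;
using fixed_t = int32_t;

constexpr fixed_t to_fixed(double x) {
    return static_cast<fixed_t>(x * (1 << FRAC_BITS));
}

// Fully connected layer followed by ReLU. In an HLS design the loops
// would be fully unrolled (e.g. via an UNROLL pragma) so that every
// multiply maps to its own DSP block and the layer completes in a
// fixed, deterministic number of cycles.
std::array<fixed_t, N_OUT> dense_relu(
    const std::array<fixed_t, N_IN>& in,
    const fixed_t (&w)[N_OUT][N_IN],
    const fixed_t (&b)[N_OUT]) {
    std::array<fixed_t, N_OUT> out{};
    for (int o = 0; o < N_OUT; ++o) {
        // Widen the accumulator, as HLS does, to avoid overflow in the sum.
        int64_t acc = static_cast<int64_t>(b[o]) << FRAC_BITS;
        for (int i = 0; i < N_IN; ++i)
            acc += static_cast<int64_t>(w[o][i]) * in[i];
        fixed_t y = static_cast<fixed_t>(acc >> FRAC_BITS);
        out[o] = std::max<fixed_t>(y, 0);  // ReLU
    }
    return out;
}
```

The trade-off the paper maps out follows directly from this structure: fully unrolling both loops buys the ~100 ns latency at the cost of one multiplier per weight, so resource usage scales with the number of network parameters and the chosen fixed-point width.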

HGPU group © 2010-2018 hgpu.org

All rights belong to the respective authors