
GPU coprocessors as a service for deep learning inference in high energy physics

Jeffrey Krupa, Kelvin Lin, Maria Acosta Flechas, Jack Dinsmore, Javier Duarte, Philip Harris, Scott Hauck, Burt Holzman, Shih-Chieh Hsu, Thomas Klijnsma, Mia Liu, Kevin Pedro, Natchanon Suaysom, Matt Trahms, Nhan Tran
Massachusetts Institute of Technology, Cambridge, MA 02139
arXiv:2007.10359 [physics.comp-ph], (20 Jul 2020)

@misc{krupa2020gpu,
   title={GPU coprocessors as a service for deep learning inference in high energy physics},
   author={Jeffrey Krupa and Kelvin Lin and Maria Acosta Flechas and Jack Dinsmore and Javier Duarte and Philip Harris and Scott Hauck and Burt Holzman and Shih-Chieh Hsu and Thomas Klijnsma and Mia Liu and Kevin Pedro and Natchanon Suaysom and Matt Trahms and Nhan Tran},
   year={2020},
   eprint={2007.10359},
   archivePrefix={arXiv},
   primaryClass={physics.comp-ph}
}

In the next decade, the demands for computing in large scientific experiments are expected to grow tremendously. During the same time period, CPU performance increases will be limited. At the CERN Large Hadron Collider (LHC), these two issues will confront one another as the collider is upgraded for high luminosity running. Alternative processors such as graphics processing units (GPUs) can resolve this confrontation provided that algorithms can be sufficiently accelerated. In many cases, algorithmic speedups are found to be largest through the adoption of deep learning algorithms. We present a comprehensive exploration of the use of GPU-based hardware acceleration for deep learning inference within the data reconstruction workflow of high energy physics. We present several realistic examples and discuss a strategy for the seamless integration of coprocessors so that the LHC can maintain, if not exceed, its current performance throughout its running.
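The "as a service" approach described in the paper decouples the deep learning model from the CPU-based reconstruction code: the event-processing job sends an inference request over the network to a GPU-backed server and receives the model output in return. As a rough illustration of this pattern only, the minimal sketch below uses the NVIDIA Triton Inference Server Python HTTP client; the server address, model name, tensor names, and shapes are placeholders chosen for the example, not details taken from the paper.

import numpy as np
import tritonclient.http as httpclient

# Connect to a (hypothetical) Triton server fronting the GPU coprocessors.
client = httpclient.InferenceServerClient(url="triton.example.org:8000")

# Build one inference request; "jet_tagger" and the tensor names/shapes
# are illustrative placeholders, not the models studied in the paper.
features = np.random.rand(1, 100, 16).astype(np.float32)
inp = httpclient.InferInput("features", list(features.shape), "FP32")
inp.set_data_from_numpy(features)
out = httpclient.InferRequestedOutput("scores")

# The CPU-side job only waits for the network round trip; the deep
# learning inference itself runs on the remote GPU.
result = client.infer(model_name="jet_tagger", inputs=[inp], outputs=[out])
scores = result.as_numpy("scores")
print(scores.shape)

In the paper this pattern is integrated into the experiments' data reconstruction workflow itself rather than run as a standalone script, so that existing CPU-based jobs can offload only the deep learning inference to remote GPUs.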
