Applications of Many-Core Technologies to On-line Event Reconstruction in High Energy Physics Experiments

A. Gianelle, S. Amerio, D. Bastieri, M. Corvo, W. Ketchum, T. Liu, A. Lonardo, D. Lucchesi, S. Poprocki, R. Rivera, L. Tosoratto, P. Vicini, P. Wittich
University of Padova
arXiv:1312.0917 [physics.ins-det] (4 Dec 2013)


@ARTICLE{2013arXiv1312.0917G,
   author={{Gianelle}, A. and {Amerio}, S. and {Bastieri}, D. and {Corvo}, M. and {Ketchum}, W. and {Liu}, T. and {Lonardo}, A. and {Lucchesi}, D. and {Poprocki}, S. and {Rivera}, R. and {Tosoratto}, L. and {Vicini}, P. and {Wittich}, P.},
   title="{Applications of Many-Core Technologies to On-line Event Reconstruction in High Energy Physics Experiments}",
   journal={ArXiv e-prints},
   eprint={1312.0917},
   year={2013},
   month={dec},
   keywords={Physics - Instrumentation and Detectors, Computer Science - Distributed, Parallel, and Cluster Computing, High Energy Physics - Experiment},
   adsnote={Provided by the SAO/NASA Astrophysics Data System}
}





Interest in many-core architectures applied to real-time selections is growing in High Energy Physics (HEP) experiments. In this paper we describe performance measurements of many-core devices when applied to a typical HEP online task: the selection of events based on the trajectories of charged particles. As a benchmark we use a scaled-up version of the algorithm used at the CDF experiment at the Tevatron for online track reconstruction — the SVT algorithm — as a realistic test case for low-latency trigger systems using new computing architectures for LHC experiments. We examine the complexity/performance trade-off in porting existing serial algorithms to many-core devices. We measure the performance of different architectures (Intel Xeon Phi and AMD GPUs, in addition to NVIDIA GPUs) and different software environments (OpenCL, in addition to NVIDIA CUDA). Measurements of both data processing and data transfer latency are shown, considering different I/O strategies to/from the many-core devices.
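The separation of data-transfer and data-processing latency described in the abstract can be sketched with CUDA events, which timestamp points in a stream and let the host read back elapsed times. The snippet below is a minimal illustration, not the authors' code: the `process` kernel is a trivial stand-in for the actual SVT track-fitting stage, and the sizes and launch configuration are arbitrary choices for the example.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel standing in for the real per-event processing stage.
__global__ void process(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 2.0f * in[i];  // trivial stand-in computation
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *h_in;
    cudaMallocHost(&h_in, bytes);  // pinned host memory for faster DMA transfers
    for (int i = 0; i < n; ++i) h_in[i] = 1.0f;

    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);

    cudaEvent_t t0, t1, t2;
    cudaEventCreate(&t0); cudaEventCreate(&t1); cudaEventCreate(&t2);

    cudaEventRecord(t0);
    cudaMemcpyAsync(d_in, h_in, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(t1);  // marks end of host-to-device transfer
    process<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
    cudaEventRecord(t2);  // marks end of kernel execution
    cudaEventSynchronize(t2);

    float ms_copy, ms_kernel;
    cudaEventElapsedTime(&ms_copy, t0, t1);    // data-transfer latency
    cudaEventElapsedTime(&ms_kernel, t1, t2);  // data-processing latency
    printf("H2D copy: %.3f ms, kernel: %.3f ms\n", ms_copy, ms_kernel);

    cudaFree(d_in); cudaFree(d_out); cudaFreeHost(h_in);
    return 0;
}
```

Timing the copy and the kernel separately in this way is what makes it possible to compare I/O strategies (e.g. pinned vs. pageable memory, or overlapping transfers with computation) independently of the processing cost itself.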

