
Many-core applications to online track reconstruction in HEP experiments

S. Amerio, D. Bastieri, M. Corvo, A. Gianelle, W. Ketchum, T. Liu, A. Lonardo, D. Lucchesi, S. Poprocki, R. Rivera, L. Tosoratto, P. Vicini, P. Wittich
INFN and University of Padova, Padova, Italy
arXiv:1311.0380 [physics.ins-det] (2 Nov 2013)

@article{2013arXiv1311.0380A,
   author = {{Amerio}, S. and {Bastieri}, D. and {Corvo}, M. and {Gianelle}, A. and {Ketchum}, W. and {Liu}, T. and {Lonardo}, A. and {Lucchesi}, D. and {Poprocki}, S. and {Rivera}, R. and {Tosoratto}, L. and {Vicini}, P. and {Wittich}, P.},
   title = "{Many-core applications to online track reconstruction in HEP experiments}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1311.0380},
   primaryClass = "physics.ins-det",
   keywords = {Physics - Instrumentation and Detectors, Computer Science - Distributed, Parallel, and Cluster Computing},
   year = 2013,
   month = nov,
   adsurl = {http://adsabs.harvard.edu/abs/2013arXiv1311.0380A},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


Interest in parallel architectures applied to real-time selections is growing in High Energy Physics (HEP) experiments. In this paper we describe performance measurements of Graphics Processing Units (GPUs) and the Intel Many Integrated Core (MIC) architecture when applied to a typical HEP online task: the selection of events based on the trajectories of charged particles. As a benchmark we use a scaled-up version of the SVT algorithm, used at the CDF experiment at the Tevatron for online track reconstruction, as a realistic test case for low-latency trigger systems using new computing architectures for LHC experiments. We examine the complexity/performance trade-off in porting existing serial algorithms to many-core devices. Measurements of both data processing and data transfer latency are shown, considering different I/O strategies to and from the parallel devices.
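For readers new to this class of algorithm, the sketch below (not taken from the paper) shows how an SVT-style linearized track fit maps onto a GPU in CUDA: each fit parameter is obtained as a scalar product of a candidate's hit coordinates with a precomputed constant set, and one thread fits one candidate. The kernel name, array sizes, constant values and the single-stream asynchronous copies are illustrative assumptions, meant only to hint at the kind of data-parallel kernel and host/device I/O whose latencies the paper measures.

// Hypothetical sketch, not the authors' code: an SVT-style linearized track fit
// on the GPU, one thread per track candidate, with asynchronous host/device
// copies issued on a CUDA stream. All sizes and constants are illustrative.
#include <cstdio>
#include <cuda_runtime.h>

constexpr int kHitsPerTrack = 6;  // assumed hit coordinates per candidate
constexpr int kFitParams    = 3;  // assumed fitted parameters per track

// In a linearized fit each parameter is a scalar product of the hit
// coordinates with precomputed constants, plus an offset.
__constant__ float d_coeff[kFitParams][kHitsPerTrack];
__constant__ float d_offset[kFitParams];

__global__ void fitTracks(const float* hits, float* params, int nTracks)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= nTracks) return;
    const float* h = hits + t * kHitsPerTrack;
    for (int p = 0; p < kFitParams; ++p) {
        float v = d_offset[p];
        for (int j = 0; j < kHitsPerTrack; ++j)
            v += d_coeff[p][j] * h[j];
        params[t * kFitParams + p] = v;
    }
}

int main()
{
    const int nTracks = 1 << 16;
    const size_t hitsBytes   = nTracks * kHitsPerTrack * sizeof(float);
    const size_t paramsBytes = nTracks * kFitParams * sizeof(float);

    // Illustrative constant set: identity-like coefficients, zero offsets.
    float h_coeff[kFitParams][kHitsPerTrack] = {};
    float h_offset[kFitParams] = {};
    for (int p = 0; p < kFitParams; ++p) h_coeff[p][p] = 1.0f;
    cudaMemcpyToSymbol(d_coeff, h_coeff, sizeof(h_coeff));
    cudaMemcpyToSymbol(d_offset, h_offset, sizeof(h_offset));

    // Pinned host buffers so the copies below can be truly asynchronous.
    float *h_hits, *h_params;
    cudaMallocHost((void**)&h_hits, hitsBytes);
    cudaMallocHost((void**)&h_params, paramsBytes);
    for (int i = 0; i < nTracks * kHitsPerTrack; ++i) h_hits[i] = 0.1f * (i % 7);

    float *d_hits, *d_params;
    cudaMalloc((void**)&d_hits, hitsBytes);
    cudaMalloc((void**)&d_params, paramsBytes);

    // Copy-in, kernel and copy-out are issued asynchronously on one stream;
    // they still execute in order, so overlapping transfer with compute would
    // require splitting the batch across several streams (one possible I/O
    // strategy to/from the device).
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    cudaMemcpyAsync(d_hits, h_hits, hitsBytes, cudaMemcpyHostToDevice, stream);
    const int threads = 256;
    const int blocks  = (nTracks + threads - 1) / threads;
    fitTracks<<<blocks, threads, 0, stream>>>(d_hits, d_params, nTracks);
    cudaMemcpyAsync(h_params, d_params, paramsBytes, cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);

    printf("track 0 fitted parameters: %f %f %f\n",
           h_params[0], h_params[1], h_params[2]);

    cudaFree(d_hits); cudaFree(d_params);
    cudaFreeHost(h_hits); cudaFreeHost(h_params);
    cudaStreamDestroy(stream);
    return 0;
}

Timing the cudaMemcpyAsync calls and the kernel separately (for example with CUDA events) reproduces, in miniature, the split between data-transfer and data-processing latency discussed in the paper.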
