Physics and Computing Performance of the Exa.TrkX TrackML Pipeline

Xiangyang Ju, Daniel Murnane, Paolo Calafiura, Nicholas Choma, Sean Conlon, Steve Farrell, Yaoyuan Xu, Maria Spiropulu, Jean-Roch Vlimant, Adam Aurisano, Jeremy Hewes, Giuseppe Cerati, Lindsey Gray, Thomas Klijnsma, Jim Kowalkowski, Markus Atkinson, Mark Neubauer, Gage DeZoort, Savannah Thais, Aditi Chauhan, Alex Schuy, Shih-Chieh Hsu, Alex Ballow, Alina Lazar
Lawrence Berkeley National Laboratory
arXiv:2103.06995 [hep-ex] (11 Mar 2021)


   title={Physics and Computing Performance of the Exa.TrkX TrackML Pipeline},

   author={Xiangyang Ju and Daniel Murnane and Paolo Calafiura and Nicholas Choma and Sean Conlon and Steve Farrell and Yaoyuan Xu and Maria Spiropulu and Jean-Roch Vlimant and Adam Aurisano and Jeremy Hewes and Giuseppe Cerati and Lindsey Gray and Thomas Klijnsma and Jim Kowalkowski and Markus Atkinson and Mark Neubauer and Gage DeZoort and Savannah Thais and Aditi Chauhan and Alex Schuy and Shih-Chieh Hsu and Alex Ballow and and Alina Lazar},






The Exa.TrkX project has applied geometric learning concepts such as metric learning and graph neural networks to HEP particle tracking. The Exa.TrkX tracking pipeline clusters detector measurements to form track candidates and filters them. The pipeline, originally developed using the TrackML dataset (a simulation of an LHC-like tracking detector), has been demonstrated on various detectors, including the DUNE LArTPC and the CMS High-Granularity Calorimeter. This paper documents new developments needed to study the physics and computing performance of the Exa.TrkX pipeline on the full TrackML dataset, a first step towards validating the pipeline using ATLAS and CMS data. The pipeline achieves tracking efficiency and purity similar to production tracking algorithms. Crucially for future HEP applications, the pipeline benefits significantly from GPU acceleration, and its computational requirements scale close to linearly with the number of particles in the event.
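To make the pipeline's structure concrete, the sketch below illustrates two of the stages the abstract describes: building a graph by connecting each detector hit to its nearest neighbors in a learned embedding space, and clustering the hits linked by surviving edges into track candidates via connected components. This is a minimal numpy illustration of the general technique only; the function names and the brute-force k-nearest-neighbor search are ours, not taken from the Exa.TrkX codebase, which uses learned embeddings, a GNN edge filter, and optimized GPU libraries for these steps.

```python
import numpy as np

def knn_graph(embeddings, k):
    """Connect each hit to its k nearest neighbors in embedding space.

    embeddings: (n_hits, dim) array of embedded hit coordinates.
    Returns a list of directed edges (i, j).
    """
    # Brute-force pairwise Euclidean distances (fine for a toy example;
    # real pipelines use approximate GPU k-NN at this scale).
    d = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-edges
    nbrs = np.argsort(d, axis=1)[:, :k]  # indices of the k closest hits
    return [(i, int(j)) for i in range(len(embeddings)) for j in nbrs[i]]

def cluster_tracks(n_hits, edges):
    """Group hits joined by (filtered) edges into track candidates
    using union-find connected components."""
    parent = list(range(n_hits))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    return [find(i) for i in range(n_hits)]  # component label per hit
```

In the real pipeline an edge-classifying GNN would prune the k-NN edges before clustering; here, with well-separated toy embeddings, nearest-neighbor edges alone already split the hits into distinct candidates.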

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
