
Benchmarking GPU and TPU Performance with Graph Neural Networks

Xiangyang Ju, Yunsong Wang, Daniel Murnane, Nicholas Choma, Steven Farrell, Paolo Calafiura
Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
arXiv:2210.12247 [cs.LG] (21 Oct 2022)

@misc{ju2022benchmarking,
   doi={10.48550/arXiv.2210.12247},
   url={https://arxiv.org/abs/2210.12247},
   author={Ju, Xiangyang and Wang, Yunsong and Murnane, Daniel and Choma, Nicholas and Farrell, Steven and Calafiura, Paolo},
   keywords={Machine Learning (cs.LG), FOS: Computer and information sciences},
   title={Benchmarking GPU and TPU Performance with Graph Neural Networks},
   publisher={arXiv},
   year={2022},
   copyright={Creative Commons Attribution Share Alike 4.0 International}
}


Many artificial intelligence (AI) accelerators have been developed to speed up the training and inference of neural network models. The most common are the Graphics Processing Unit (GPU) and the Tensor Processing Unit (TPU), both of which are highly optimized for dense data representations. However, sparse representations such as graphs are prevalent in many domains, including science, so it is important to characterize the performance of available AI accelerators on sparse data. This work analyzes and compares GPU and TPU performance when training a Graph Neural Network (GNN) developed to solve a real-life pattern recognition problem. Characterizing this new class of models acting on sparse data may prove helpful in optimizing the design of deep learning libraries and future AI accelerators.
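
For context, the performance question the paper studies stems from the irregular gather/scatter operations at the heart of GNN message passing, as opposed to the dense matrix multiplications that GPUs and TPUs are built around. The sketch below is not taken from the paper; the graph, feature sizes, and single-layer architecture are illustrative assumptions. It shows one generic message-passing step in JAX, where source-node features are gathered along edges and scatter-added back to destination nodes:

   import jax
   import jax.numpy as jnp

   def message_passing_step(node_feats, senders, receivers, w):
       # Gather source-node features along each edge (irregular memory access).
       messages = node_feats[senders] @ w
       # Scatter-add messages back to destination nodes (sparse aggregation);
       # this is the step that maps poorly onto dense-matmul hardware.
       aggregated = jnp.zeros_like(node_feats).at[receivers].add(messages)
       return jax.nn.relu(node_feats + aggregated)

   # Illustrative toy graph: 4 nodes with 8 features, 4 directed edges
   # (all shapes and values here are assumptions, not from the paper).
   key = jax.random.PRNGKey(0)
   node_feats = jax.random.normal(key, (4, 8))
   senders = jnp.array([0, 1, 2, 3])
   receivers = jnp.array([1, 2, 3, 0])
   w = jax.random.normal(key, (8, 8))
   out = jax.jit(message_passing_step)(node_feats, senders, receivers, w)

Profiling a step like this on both device types exposes exactly the dense-versus-sparse contrast the paper measures: the matrix multiply is a good fit for both accelerators, while the gather and scatter-add are not.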
