GenGNN: A Generic FPGA Framework for Graph Neural Network Acceleration
Georgia Institute of Technology, School of Electrical and Computer Engineering, Atlanta, GA, USA
arXiv:2201.08475 [cs.LG], 20 Jan 2022
@misc{abikaram2022gengnn,
  title={GenGNN: A Generic FPGA Framework for Graph Neural Network Acceleration},
  author={Stefan Abi-Karam and Yuqi He and Rishov Sarkar and Lakshmi Sathidevi and Zihang Qiao and Cong Hao},
  year={2022},
  eprint={2201.08475},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
Graph neural networks (GNNs) have recently exploded in popularity thanks to their broad applicability to ubiquitous graph-related problems such as quantum chemistry, drug discovery, and high energy physics. However, meeting the demand for novel GNN models and fast inference simultaneously is challenging because of the gap between the difficulty of developing efficient FPGA accelerators and the rapid pace at which new GNN models are introduced. Prior art focuses on accelerating specific classes of GNNs but lacks the generality to work across existing models or to extend to new and emerging GNN models. In this work, we propose a generic GNN acceleration framework using High-Level Synthesis (HLS), named GenGNN, with two-fold goals. First, we aim to deliver ultra-fast GNN inference without any graph pre-processing, to meet real-time requirements. Second, we aim to support a diverse set of GNN models with the extensibility to flexibly adapt to new models. The framework features an optimized message-passing structure applicable to all models, combined with a rich library of model-specific components. We verify our implementation on-board on the Xilinx Alveo U50 FPGA and observe a speed-up of up to 25x against a CPU (6226R) baseline and 13x against a GPU (A6000) baseline.
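For readers unfamiliar with the message-passing structure the abstract refers to, the following is a minimal NumPy sketch of one generic message-passing layer (message, sum-aggregate, update). This is an illustrative sketch only, not GenGNN's HLS implementation; the function name and the weight matrices `w_msg` and `w_upd` are hypothetical placeholders.

```python
import numpy as np

def message_passing_layer(x, edge_index, w_msg, w_upd):
    """One generic message-passing layer: message -> aggregate -> update.

    x:          (N, F_in) node feature matrix
    edge_index: (2, E) array of (source, destination) node indices (COO format)
    w_msg:      (F_in, F_hid) message weight matrix (illustrative assumption)
    w_upd:      (F_hid, F_out) update weight matrix (illustrative assumption)
    """
    src, dst = edge_index
    # Message: each edge carries a transformed copy of its source node's features.
    messages = x[src] @ w_msg                      # (E, F_hid)
    # Aggregate: sum incoming messages at each destination node.
    agg = np.zeros((x.shape[0], w_msg.shape[1]))   # (N, F_hid)
    np.add.at(agg, dst, messages)
    # Update: transform the aggregate and apply a ReLU nonlinearity.
    return np.maximum(agg @ w_upd, 0.0)            # (N, F_out)
```

Model-specific variants (e.g. attention-weighted messages in GAT, or degree-normalized aggregation in GCN) differ mainly in the message and aggregation steps, which is why a framework can share the outer message-passing skeleton while swapping in model-specific components.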
January 30, 2022 by hgpu