
QGTC: Accelerating Quantized GNN via GPU Tensor Core

Yuke Wang, Boyuan Feng, Yufei Ding
University of California, Santa Barbara, U.S.A.
arXiv:2111.09547 [cs.DC], 18 Nov 2021

@misc{wang2021qgtc,
   title={QGTC: Accelerating Quantized GNN via GPU Tensor Core},
   author={Yuke Wang and Boyuan Feng and Yufei Ding},
   year={2021},
   eprint={2111.09547},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}


In recent years, quantized graph neural networks (QGNNs) have attracted considerable research and industry attention for their robustness and low computation and memory overhead. Unfortunately, the performance gains of QGNNs have never been realized on modern GPU platforms. To this end, we propose QGTC, the first Tensor Core (TC) based computing framework to support any-bitwidth computation for QGNNs on GPUs. We introduce a novel quantized low-bit arithmetic design based on low-bit data representation and bit-decomposed computation. We craft a TC-tailored CUDA kernel design that incorporates 3D-stacked bit compression, zero-tile jumping, and non-zero tile reuse to improve performance systematically. We incorporate an effective bandwidth-optimized subgraph packing strategy to maximize transfer efficiency between the CPU host and the GPU device. We integrate QGTC with PyTorch for better programmability and extensibility. Extensive experiments demonstrate that QGTC achieves an average 3.17x speedup over the state-of-the-art Deep Graph Library framework across diverse settings.
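
The arithmetic idea behind any-bitwidth TC computation can be stated compactly: decompose an a-bit matrix A into bit planes A = sum_i 2^i * A_i and a b-bit matrix B into B = sum_j 2^j * B_j, so that A x B = sum_{i,j} 2^(i+j) * (A_i x B_j), where every A_i x B_j is a 1-bit matrix product (bitwise AND plus popcount), exactly the operation binary Tensor Core instructions accelerate. Below is a minimal NumPy sketch of this bit-decomposed computation; the function names (bit_planes, bit_decomposed_matmul) are illustrative and not from the paper, and QGTC's actual kernels run the 1-bit products on Tensor Cores in hand-written CUDA, which this sketch only emulates on the CPU.

import numpy as np

def bit_planes(x, bits):
    # Decompose an unsigned integer matrix into `bits` binary planes,
    # so that x == sum_i 2^i * plane_i with every plane in {0, 1}.
    return [(x >> i) & 1 for i in range(bits)]

def bit_decomposed_matmul(a, b, a_bits, b_bits):
    # Any-bitwidth product built from 1-bit matmuls:
    #   A @ B = sum_{i,j} 2^(i+j) * (A_i @ B_j),
    # where each A_i @ B_j is a binary matmul (AND + popcount),
    # the operation that binary Tensor Core instructions accelerate.
    planes_b = bit_planes(b, b_bits)
    out = np.zeros((a.shape[0], b.shape[1]), dtype=np.int64)
    for i, ai in enumerate(bit_planes(a, a_bits)):
        if not ai.any():
            continue  # skip all-zero planes: a coarse analogue of QGTC's zero-tile jumping
        for j, bj in enumerate(planes_b):
            out += (ai.astype(np.int64) @ bj.astype(np.int64)) << (i + j)
    return out

# Sanity check against a plain integer matmul.
rng = np.random.default_rng(0)
a = rng.integers(0, 8, size=(16, 32))   # 3-bit operand
b = rng.integers(0, 4, size=(32, 16))   # 2-bit operand
assert np.array_equal(bit_decomposed_matmul(a, b, 3, 2), a @ b)

The sketch makes the cost model visible: an a-bit by b-bit product costs a x b binary matmuls, which is why low-bit quantization pairs naturally with 1-bit Tensor Core throughput.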