
FSpGEMM: An OpenCL-based HPC Framework for Accelerating General Sparse Matrix-Matrix Multiplication on FPGAs

Erfan Bank Tavakoli, Michael Riera, Masudul Hassan Quraishi, Fengbo Ren
School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ 85281
arXiv:2112.10037 [cs.PF] (19 Dec 2021)

@misc{tavakoli2021fspgemm,
   title={FSpGEMM: An OpenCL-based HPC Framework for Accelerating General Sparse Matrix-Matrix Multiplication on FPGAs},
   author={Erfan Bank Tavakoli and Michael Riera and Masudul Hassan Quraishi and Fengbo Ren},
   year={2021},
   eprint={2112.10037},
   archivePrefix={arXiv},
   primaryClass={cs.PF}
}


General sparse matrix-matrix multiplication (SpGEMM) is an integral part of many scientific computing, high-performance computing (HPC), and graph analytic applications. This paper presents a new compressed sparse vector (CSV) format for representing sparse matrices and FSpGEMM, an OpenCL-based HPC framework for accelerating general sparse matrix-matrix multiplication on FPGAs. The proposed FSpGEMM framework includes an FPGA kernel implementing a throughput-optimized hardware architecture based on Gustavson’s algorithm and a host program implementing pre-processing functions for converting input matrices to the CSV format tailored for the proposed architecture. FSpGEMM utilizes a new buffering scheme tailored to Gustavson’s algorithm. We compare FSpGEMM, implemented on an Intel Arria 10 GX FPGA development board, with Intel Math Kernel Library (MKL) on an Intel Xeon E5-2637 CPU and cuSPARSE on an NVIDIA GTX TITAN X GPU for multiplying a set of sparse matrices selected from the SuiteSparse Matrix Collection. The experimental results show that the proposed FSpGEMM solution achieves on average 4.9x and 1.7x higher performance with 31.9x and 13.1x lower energy consumption per SpGEMM computation than the CPU and GPU implementations, respectively.
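The kernel architecture is based on Gustavson’s algorithm, which forms C = A·B one output row at a time: for each nonzero A(i,k), row k of B is scaled by A(i,k) and accumulated into row i of C. The following is a minimal C++ sketch of this row-wise scheme on CSR inputs, using a dense accumulator per output row; the CSR struct, field names, and accumulator strategy are assumptions made here for illustration and do not represent the paper’s CSV format, buffering scheme, or OpenCL kernel.

// Minimal sketch of Gustavson's row-wise SpGEMM on CSR matrices.
// Illustration only: the CSR layout and the dense-accumulator strategy
// below are assumptions, not the FSpGEMM CSV format or FPGA kernel.
#include <vector>

struct CSR {
    int rows = 0, cols = 0;
    std::vector<int> row_ptr;    // size rows + 1
    std::vector<int> col_idx;    // size nnz
    std::vector<double> val;     // size nnz
};

// C = A * B: for every nonzero A(i,k), scale row k of B by A(i,k)
// and accumulate it into row i of C.
CSR spgemm_gustavson(const CSR& A, const CSR& B) {
    CSR C;
    C.rows = A.rows;
    C.cols = B.cols;
    C.row_ptr.assign(A.rows + 1, 0);

    std::vector<double> acc(B.cols, 0.0);   // dense accumulator for one row of C
    std::vector<bool> occupied(B.cols, false);
    std::vector<int> nz_cols;               // columns touched in the current row

    for (int i = 0; i < A.rows; ++i) {
        nz_cols.clear();
        for (int a = A.row_ptr[i]; a < A.row_ptr[i + 1]; ++a) {
            int k = A.col_idx[a];
            double a_ik = A.val[a];
            for (int b = B.row_ptr[k]; b < B.row_ptr[k + 1]; ++b) {
                int j = B.col_idx[b];
                if (!occupied[j]) {
                    occupied[j] = true;
                    nz_cols.push_back(j);
                }
                acc[j] += a_ik * B.val[b];
            }
        }
        // Flush the accumulator into row i of C and reset it.
        // Column indices within a row are left unsorted for brevity.
        for (int j : nz_cols) {
            C.col_idx.push_back(j);
            C.val.push_back(acc[j]);
            acc[j] = 0.0;
            occupied[j] = false;
        }
        C.row_ptr[i + 1] = static_cast<int>(C.col_idx.size());
    }
    return C;
}

In the paper, by contrast, the host-side pre-processing converts the inputs to the CSV format and the FPGA kernel uses a buffering scheme tailored to this row-wise access pattern, rather than a dense accumulator as in the sketch above.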
