FPGA vs. GPU for sparse matrix vector multiply

Yan Zhang, Yasser H. Shalabi, Rishabh Jain, Krishna K. Nagar, Jason D. Bakos
Dept. of Computer Science and Engineering, Univ. of South Carolina, Columbia, SC 29208 USA
2009 International Conference on Field-Programmable Technology (2009), Publisher: IEEE, Pages: 255-262


@inproceedings{zhang2009fpga,
   title={FPGA vs. GPU for sparse matrix vector multiply},
   author={Zhang, Y. and Shalabi, Y.H. and Jain, R. and Nagar, K.K. and Bakos, J.D.},
   booktitle={Field-Programmable Technology, 2009. FPT 2009. International Conference on},
   year={2009},
   pages={255--262}
}






Sparse matrix-vector multiplication (SpMV) is a common operation in numerical linear algebra and is the computational kernel of many scientific applications. It is one of the original and perhaps most studied targets for FPGA acceleration. Despite this, GPUs, which have only recently gained both general-purpose programmability and native support for double precision floating-point arithmetic, are viewed by some as a more effective platform for SpMV and similar linear algebra computations. In this paper, we present an analysis comparing an existing GPU SpMV implementation to our own, novel FPGA implementation. In this analysis, we describe the challenges faced by any SpMV implementation, the unique approaches to these challenges taken by both FPGA and GPU implementations, and their relative performance for SpMV.

* * *
HGPU group © 2010-2021 hgpu.org

All rights belong to the respective authors
