Faster matrix-vector multiplication on GeForce 8800GTX

N. Fujimoto
Graduate School of Information Science and Technology, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka, 560-8531, Japan
Parallel and Distributed Processing, 2008. IPDPS 2008. IEEE International Symposium on (2008), pp. 1-8.

@conference{fujimoto2008faster,

   title={Faster matrix-vector multiplication on GeForce 8800GTX},

   author={Fujimoto, N.},

   booktitle={Parallel and Distributed Processing, 2008. IPDPS 2008. IEEE International Symposium on},

   pages={1--8},

   issn={1530-2075},

   year={2008},

   organization={IEEE}

}

Recently, GPUs have acquired the programmability to perform general-purpose computation fast by running tens of thousands of threads concurrently. This paper presents a new algorithm for dense matrix-vector multiplication on the NVIDIA CUDA architecture. Experimental results on a GeForce 8800GTX show that the proposed algorithm runs up to 15.69 (resp., 32.88) times faster than the sgemv routine in NVIDIA’s BLAS library CUBLAS 1.1 (resp., Intel Math Kernel Library 9.1 on one core of a 2.0 GHz Intel Xeon E5335 CPU with SSE3 SIMD instructions) for matrices of order 16 to 12800. The performance, including the data transfer between CPU and GPU, of Jacobi’s iterative method for solving linear equations shows that the proposed algorithm is practical for some real applications.

* * *

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors