Singular value decomposition on GPU using CUDA
Center for Visual Information Technology, International Institute of Information Technology, Hyderabad, India
2009 IEEE International Symposium on Parallel & Distributed Processing
@inproceedings{lahabar2009singular,
  title={Singular value decomposition on GPU using CUDA},
  author={Lahabar, S. and Narayanan, P. J.},
  booktitle={2009 IEEE International Symposium on Parallel \& Distributed Processing},
  year={2009},
  publisher={IEEE}
}
Linear algebra algorithms are fundamental to many computing applications. Modern GPUs are suited to many general-purpose processing tasks and have emerged as inexpensive high-performance co-processors due to their tremendous computing power. In this paper, we present the implementation of the singular value decomposition (SVD) of a dense matrix on the GPU using the CUDA programming model. SVD is computed in the twin steps of bidiagonalization followed by diagonalization; it has not been implemented on the GPU before. Bidiagonalization is implemented using a series of Householder transformations, which map well to BLAS operations. Diagonalization is performed by applying the implicitly shifted QR algorithm. Our complete SVD implementation significantly outperforms the MATLAB and Intel Math Kernel Library (MKL) LAPACK implementations on the CPU. We show a speedup of up to 60 over the MATLAB implementation and up to 8 over the Intel MKL implementation for large matrices, using an NVIDIA GTX 280 against an Intel Dual Core 2.66 GHz PC. We also give results for very large matrices on the NVIDIA Tesla S1070.
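The abstract's point that Householder bidiagonalization "maps well to BLAS operations" can be illustrated with a small sketch. The fragment below is not the authors' code; it is a hypothetical example showing how a single left Householder reflector applied to the trailing submatrix reduces to a GEMV plus a rank-1 GER update in cuBLAS. The function name `householder_left_update` and its workspace arguments are assumptions made for illustration.

```c
/* Hypothetical sketch (not the paper's implementation): apply one left
 * Householder reflector to the trailing submatrix,
 *   A(k:m, k+1:n) -= beta * v * (v^T * A(k:m, k+1:n)),
 * expressed as cuBLAS GEMV + GER, which is how bidiagonalization
 * steps map onto BLAS operations. */
#include <cublas_v2.h>

/* d_A: m x n column-major matrix on the device (lda = m),
 * d_v: Householder vector for column k (length m - k),
 * d_w: device workspace of length n - k - 1,
 * beta: Householder scalar for this reflector. */
void householder_left_update(cublasHandle_t handle,
                             double *d_A, int lda, int m, int n, int k,
                             const double *d_v, double beta, double *d_w)
{
    int rows = m - k;          /* trailing rows touched by the reflector    */
    int cols = n - k - 1;      /* trailing columns to the right of column k */
    double one = 1.0, zero = 0.0, neg_beta = -beta;
    double *d_Asub = d_A + (size_t)(k + 1) * lda + k;   /* &A(k, k+1) */

    /* w = A(k:m, k+1:n)^T * v   (matrix-vector product, GEMV) */
    cublasDgemv(handle, CUBLAS_OP_T, rows, cols,
                &one, d_Asub, lda, d_v, 1, &zero, d_w, 1);

    /* A(k:m, k+1:n) -= beta * v * w^T   (rank-1 update, GER) */
    cublasDger(handle, rows, cols, &neg_beta, d_v, 1, d_w, 1, d_Asub, lda);
}
```

In such a scheme, a loop over columns k would generate the Householder vector for column k, call an update like the one above, and then apply the analogous right reflector to the rows, leaving a bidiagonal matrix for the subsequent implicitly shifted QR diagonalization.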
December 24, 2010 by hgpu