High Performance Relevance Vector Machine on GPUs

Depeng Yang, Getao Liang, David D. Jenkins, Gregory D. Peterson, and Husheng Li
Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN, 37996
Symposium on Application Accelerators in High Performance Computing, 2010


@inproceedings{
   title={High Performance Relevance Vector Machine on GPUs},
   author={Yang, D. and Liang, G. and Jenkins, D.D. and Peterson, G.D. and Li, H.},
   booktitle={Application Accelerators in High Performance Computing, 2010 Symposium, Papers},
   year={2010}
}




The Relevance Vector Machine (RVM) algorithm has been widely utilized in many applications, such as machine learning, image pattern recognition, and compressed sensing. However, the RVM algorithm is computationally expensive. We seek to accelerate the RVM computation for time-sensitive applications by utilizing massively parallel accelerators such as GPUs. In this paper, the computation procedure of the RVM algorithm is fully analyzed. Recursive Cholesky decomposition, the key step in the RVM algorithm, is implemented on GPUs. The GPU performance is compared with a CPU using LAPACK and with a hybrid CPU/GPU system using the MAGMA library. Results show that our GPU implementation, in both single and double precision, is approximately 4 times faster than the CPU using LAPACK, and faster than the hybrid MAGMA code when the matrix size is small.
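The "recursive" aspect of the Cholesky step can be illustrated with the bordered-matrix update: when the RVM adds a basis function, the system matrix grows by one row and column, and the existing Cholesky factor can be extended with a single triangular solve instead of a full refactorization. Below is a minimal NumPy sketch of that idea; the function name `cholesky_append` and the use of `np.linalg.solve` are illustrative assumptions, not the paper's GPU implementation.

```python
import numpy as np

def cholesky_append(L, b, c):
    """Extend the lower Cholesky factor L of A to the factor of the
    bordered matrix [[A, b], [b.T, c]] without refactorizing A.

    One triangular solve (O(n^2)) replaces a full O(n^3) decomposition.
    A production version would use a dedicated triangular solver;
    np.linalg.solve is used here only to keep the sketch self-contained.
    """
    l = np.linalg.solve(L, b)        # forward substitution: L @ l = b
    d = np.sqrt(c - l @ l)           # new diagonal entry (requires c > l.l)
    n = L.shape[0]
    L_new = np.zeros((n + 1, n + 1))
    L_new[:n, :n] = L                # old factor is reused unchanged
    L_new[n, :n] = l                 # new bottom row
    L_new[n, n] = d
    return L_new
```

Because the previous factor is reused verbatim, only the new row must be computed at each RVM iteration, which is what makes the recursive formulation attractive on accelerators.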

* * *


HGPU group © 2010-2021 hgpu.org

All rights belong to the respective authors
