Efficient SVM Training Using Parallel Primal-Dual Interior Point Method on GPU

Jing Jin, Xianggao Cai, Xiaola Lin
School of Information Science and Technology, Sun Yat-sen University, Guangzhou, 510275, China
14th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT'13), 2013

@inproceedings{jin2013efficient,
   title={Efficient SVM Training Using Parallel Primal-Dual Interior Point Method on GPU},
   author={Jin, Jing and Cai, Xianggao and Lin, Xiaola},
   booktitle={14th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT'13)},
   year={2013}
}

The training of an SVM can be viewed as a Convex Quadratic Programming (CQP) problem, which becomes difficult to solve for large-scale data sets. Traditional methods such as Sequential Minimal Optimization (SMO) train the SVM by solving a sequence of small-scale sub-problems, which costs a large amount of computation time and is hard to accelerate with the computing power of a GPU. Although Interior Point Methods (IPM) such as the primal-dual interior point method (PDIPM) can also address SVM training well and have favourable potential for parallelization on the GPU, they have comparatively high time complexity O(n^3) and space complexity O(n^2), where n is the number of training instances. Fortunately, by invoking low-rank approximation methods such as Incomplete Cholesky Factorization (ICF) and the Sherman-Morrison-Woodbury (SMW) formula, both the storage and the computation requirements of PDIPM can be reduced significantly. In this paper, a parallel PDIPM method (P-PDIPM) along with a parallel ICF method (P-ICF) is proposed to accelerate SVM training on the GPU. Experimental results indicate that the training speed of P-PDIPM on the GPU is almost 40x faster than that of the serial version (S-PDIPM) on the CPU. Besides, without extensive optimizations, P-PDIPM obtains about an 8x speedup over the state-of-the-art tool LIBSVM while maintaining high prediction accuracy.
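
A minimal sketch of the low-rank argument the abstract alludes to (the symbols Q, D, V and the rank p are assumptions for illustration, not notation taken from the paper): ICF approximates the n x n kernel matrix Q by a tall factor V of size n x p with p << n, so that Q ≈ V V^T. Each PDIPM Newton step then has to solve a system whose matrix has the form D + V V^T with D diagonal, and the Sherman-Morrison-Woodbury identity

\[
(D + V V^{\top})^{-1} = D^{-1} - D^{-1} V \left( I_p + V^{\top} D^{-1} V \right)^{-1} V^{\top} D^{-1}
\]

replaces the dense n x n factorization by a p x p one, which is what brings the per-iteration cost from O(n^3) down to roughly O(n p^2) and the storage from O(n^2) down to O(n p).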