
Implementing Parallel SMO to Train SVM on CUDA-Enabled Systems

Zhong Chen, Jack Chua
Department of Computational Science and Engineering, Georgia Institute of Technology
Georgia Institute of Technology, 2011

@article{chen2011implementation,
   title={Implementing Parallel SMO to Train SVM on CUDA-Enabled Systems},
   author={Chen, Zhong and Chua, Jack},
   year={2011}
}

We implement a Sequential Minimal Optimization (SMO)-type algorithm to solve for the Lagrange multipliers of the dual form of the Support Vector Machine (SVM) problem. Unlike the original SMO algorithm, the modified SMO algorithm uses a first-order variable selection heuristic to avoid explicit computation of the KKT conditions. Parallelism in the algorithm is exposed via a divide-and-conquer approach and via a CUDA-optimized reduction algorithm. Finally, we benchmark our implementation against LibSVM, an industrial-quality CPU implementation of the SVM. Using three different binary classification datasets, we test for weak scaling by enlarging the datasets via bootstrap resampling. We obtain speedups of up to about 9x while retaining reasonable accuracy in the number of support vectors and the offset b.
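
As an illustration of the reduction step mentioned in the abstract, the sketch below shows how a first-order working-set selection could be realized as a block-level argmax reduction over the SMO gradient values in CUDA. It is not taken from the authors' implementation; the kernel name argmax_reduce, the launch configuration, and the host driver are hypothetical, and a real SMO solver would re-launch the reduction over the per-block partial results until a single index remains.

    #include <cfloat>
    #include <cuda_runtime.h>

    // Hypothetical sketch: one pass of a block-level argmax reduction, the kind of
    // CUDA-optimized reduction used to pick the working-set index with the largest
    // optimality violation in first-order SMO variable selection.
    __global__ void argmax_reduce(const float* f, const int* idx_in,
                                  float* f_out, int* idx_out, int n)
    {
        extern __shared__ char smem[];
        const int bs = blockDim.x;                  // must be a power of two
        float* sval = reinterpret_cast<float*>(smem);
        int*   sidx = reinterpret_cast<int*>(sval + bs);

        int tid = threadIdx.x;
        int i   = blockIdx.x * bs * 2 + tid;

        // Each thread loads up to two elements and keeps the larger one.
        float best    = -FLT_MAX;
        int   bestIdx = -1;
        if (i < n)      { best = f[i]; bestIdx = idx_in ? idx_in[i] : i; }
        if (i + bs < n) {
            float v = f[i + bs];
            int   j = idx_in ? idx_in[i + bs] : i + bs;
            if (v > best) { best = v; bestIdx = j; }
        }
        sval[tid] = best;  sidx[tid] = bestIdx;
        __syncthreads();

        // Tree reduction in shared memory, carrying the index along with the value.
        for (int s = bs / 2; s > 0; s >>= 1) {
            if (tid < s && sval[tid + s] > sval[tid]) {
                sval[tid] = sval[tid + s];
                sidx[tid] = sidx[tid + s];
            }
            __syncthreads();
        }

        // Thread 0 writes this block's partial maximum and its index.
        if (tid == 0) { f_out[blockIdx.x] = sval[0]; idx_out[blockIdx.x] = sidx[0]; }
    }

    int main()
    {
        const int n       = 1 << 16;
        const int threads = 256;
        const int blocks  = (n + threads * 2 - 1) / (threads * 2);

        float* d_f;  float* d_fo;  int* d_io;
        cudaMalloc(&d_f,  n * sizeof(float));
        cudaMalloc(&d_fo, blocks * sizeof(float));
        cudaMalloc(&d_io, blocks * sizeof(int));
        // In a real solver d_f would hold the current SMO gradient values f_i.

        size_t smem = threads * (sizeof(float) + sizeof(int));
        argmax_reduce<<<blocks, threads, smem>>>(d_f, nullptr, d_fo, d_io, n);
        cudaDeviceSynchronize();

        cudaFree(d_f); cudaFree(d_fo); cudaFree(d_io);
        return 0;
    }
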
