Parallel Support Vector Machines in Practice
Washington University in St. Louis, Department of Computer Science & Engineering, St. Louis, MO, USA
arXiv:1404.1066 [cs.LG], (3 Apr 2014)
@article{2014arXiv1404.1066T,
  author        = {{Tyree}, S. and {Gardner}, J.~R. and {Weinberger}, K.~Q. and {Agrawal}, K. and {Tran}, J.},
  title         = "{Parallel Support Vector Machines in Practice}",
  journal       = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint        = {1404.1066},
  primaryClass  = "cs.LG",
  keywords      = {Computer Science - Learning},
  year          = 2014,
  month         = apr,
  adsurl        = {http://adsabs.harvard.edu/abs/2014arXiv1404.1066T},
  adsnote       = {Provided by the SAO/NASA Astrophysics Data System}
}
In this paper, we evaluate the performance of various parallel optimization methods for kernel Support Vector Machines on multicore CPUs and GPUs. In particular, we provide the first comparison of algorithms with explicit and implicit parallelization. Most existing parallel implementations for multi-core or GPU architectures are based on explicit parallelization of Sequential Minimal Optimization (SMO): the programmers identified parallelizable components and hand-parallelized them, specifically tuned for a particular architecture. We compare these approaches with each other and with implicitly parallelized algorithms, in which the algorithm is expressed so that most of the work is performed in a few iterations of large dense linear algebra operations. These operations can be computed with highly optimized libraries that are carefully parallelized for a wide variety of parallel platforms. We highlight the advantages and disadvantages of both approaches and compare them on various benchmark data sets. We find that an approximate, implicitly parallel algorithm is surprisingly efficient, permits a much simpler implementation, and leads to unprecedented speedups in SVM training.
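To illustrate the implicit-parallelization idea in the abstract, the sketch below reduces kernel training to a handful of large dense linear-algebra operations (a kernel matrix plus one regularized linear solve), so a multithreaded BLAS/LAPACK backend parallelizes the heavy work without any hand-written parallel code. This LS-SVM-style solver, its function names, and the synthetic data are assumptions for illustration only, not the authors' specific algorithm.

# A minimal sketch of "implicit parallelization": training is reduced to a few
# large dense linear-algebra operations, so a multithreaded BLAS/LAPACK backend
# parallelizes the work without explicit threading in the algorithm code.
# LS-SVM-style illustration only, not the paper's method.
import numpy as np

def rbf_kernel(X, Z, gamma):
    # Pairwise squared distances via one large matrix product (BLAS-friendly),
    # then an elementwise exponential.
    sq = (X ** 2).sum(1)[:, None] + (Z ** 2).sum(1)[None, :] - 2.0 * X @ Z.T
    return np.exp(-gamma * sq)

def train_lssvm(X, y, gamma=0.1, C=1.0):
    # Least-squares-SVM-style training: solve (K + I/C) alpha = y
    # with a single dense LAPACK solve.
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + np.eye(len(y)) / C, y)
    return alpha

def predict(X_train, alpha, X_test, gamma=0.1):
    # Prediction is again one dense kernel matrix and one matrix-vector product.
    return np.sign(rbf_kernel(X_test, X_train, gamma) @ alpha)

if __name__ == "__main__":
    # Synthetic binary classification problem (hypothetical data, for illustration).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((2000, 20))
    y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(2000))
    alpha = train_lssvm(X, y)
    print("train accuracy:", (predict(X, alpha, X) == y).mean())

Because every step above is a dense matrix operation, the same code runs in parallel on any platform with a tuned linear-algebra library, which is the contrast the paper draws with hand-parallelized SMO implementations.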
April 6, 2014 by hgpu