
A Reduction of the Elastic Net to Support Vector Machines with an Application to GPU Computing

Quan Zhou, Wenlin Chen, Shiji Song, Jacob R. Gardner, Kilian Q. Weinberger, Yixin Chen
Tsinghua University, Beijing 100084, China
arXiv:1409.1976 [stat.ML], (6 Sep 2014)

@article{2014arXiv1409.1976Z,
   author = {{Zhou}, Q. and {Chen}, W. and {Song}, S. and {Gardner}, J.~R. and {Weinberger}, K.~Q. and {Chen}, Y.},
   title = "{A Reduction of the Elastic Net to Support Vector Machines with an Application to GPU Computing}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1409.1976},
   primaryClass = "stat.ML",
   keywords = {Statistics - Machine Learning, Computer Science - Learning},
   year = 2014,
   month = sep,
   adsurl = {http://adsabs.harvard.edu/abs/2014arXiv1409.1976Z},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


Recent years have witnessed many dedicated open-source projects that build and maintain implementations of Support Vector Machines (SVMs), parallelized for GPUs, multi-core CPUs, and distributed systems. Up to this point, no comparable effort has been made to parallelize the Elastic Net, despite its popularity in many high-impact applications, including genetics, neuroscience, and systems biology. The first contribution of this paper is theoretical: we establish a tight link between two seemingly different algorithms and prove that Elastic Net regression can be reduced to SVM classification with squared hinge loss. Our second contribution is a practical algorithm based on this reduction. The reduction enables us to leverage prior efforts in speeding up and parallelizing SVMs to obtain a highly optimized, parallel solver for the Elastic Net and Lasso. With a simple wrapper, consisting of only 11 lines of MATLAB code, we obtain an Elastic Net implementation that naturally utilizes GPUs and multi-core CPUs. We demonstrate on twelve real-world data sets that our algorithm yields results identical to those of the popular (and highly optimized) glmnet implementation, but is one to several orders of magnitude faster.
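The paper's own reduction maps the Elastic Net all the way to a squared-hinge-loss SVM (and its 11-line wrapper is MATLAB, not reproduced here). As a simpler, self-contained illustration of the same reduction style, the classical Zou–Hastie data augmentation turns an Elastic Net problem into a plain Lasso problem, so that any fast Lasso back end can be reused. The sketch below assumes this augmentation plus a basic ISTA solver; the function names and solver choice are illustrative, not the authors' code.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam1, n_iter=5000):
    # Plain ISTA for: min_b ||y - X b||^2 + lam1 * ||b||_1
    step = 1.0 / (2.0 * np.linalg.norm(X, 2) ** 2)  # 1 / Lipschitz const of grad
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ b - y)
        b = soft_threshold(b - step * grad, step * lam1)
    return b

def elastic_net_via_lasso(X, y, lam1, lam2):
    # Zou-Hastie augmentation: the Elastic Net
    #     min_b ||y - X b||^2 + lam2 * ||b||_2^2 + lam1 * ||b||_1
    # equals a Lasso on augmented data, because
    #     ||y_aug - X_aug b||^2 = ||y - X b||^2 + lam2 * ||b||_2^2.
    d = X.shape[1]
    X_aug = np.vstack([X, np.sqrt(lam2) * np.eye(d)])
    y_aug = np.concatenate([y, np.zeros(d)])
    return lasso_ista(X_aug, y_aug, lam1)
```

The same pattern is what makes the paper's result practical: once the Elastic Net is rewritten as a problem an existing solver family already handles (here a Lasso; in the paper, a squared-hinge SVM), every GPU and multi-core implementation of that family becomes an Elastic Net solver for free.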
