
Large-Scale Paralleled Sparse Principal Component Analysis

W. Liu, H. Zhang, D. Tao, Y. Wang, K. Lu
China University of Petroleum
arXiv:1312.6182 [cs.LG] (21 Dec 2013)

@article{2013arXiv1312.6182L,
   author = {{Liu}, W. and {Zhang}, H. and {Tao}, D. and {Wang}, Y. and {Lu}, K.},
   title = {{Large-Scale Paralleled Sparse Principal Component Analysis}},
   journal = {ArXiv e-prints},
   archivePrefix = {arXiv},
   eprint = {1312.6182},
   primaryClass = {cs.LG},
   keywords = {Computer Science - Learning, Computer Science - Numerical Analysis, Statistics - Machine Learning},
   year = {2013},
   month = dec,
   adsurl = {http://adsabs.harvard.edu/abs/2013arXiv1312.6182L},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


Principal component analysis (PCA) is a statistical technique commonly used in multivariate data analysis. However, PCA can be difficult to interpret and explain since the principal components (PCs) are linear combinations of the original variables. Sparse PCA (SPCA) aims to balance statistical fidelity and interpretability by approximating sparse PCs whose projections capture the maximal variance of the original data. In this paper we present an efficient parallel SPCA method using graphics processing units (GPUs), which can process large blocks of data in parallel. Specifically, we construct parallel GPU implementations of the four optimization formulations of the generalized power method of SPCA (GP-SPCA), one of the most efficient and effective SPCA approaches. The parallel GPU implementation of GP-SPCA (using CUBLAS) is up to 11 times faster than the corresponding CPU implementation (using CBLAS), and up to 107 times faster than a MATLAB implementation. Extensive comparative experiments on several real-world datasets confirm that SPCA offers a practical advantage.
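The computational core of the generalized power method is a thresholded power iteration dominated by dense products with the data matrix and its transpose, which is why it maps well onto GPU BLAS routines. As a rough illustration, the following is a minimal NumPy sketch of the single-unit L1-penalized variant (one of the four formulations mentioned in the abstract); the function name `gpower_l1`, the parameter defaults, and the random initialization are our own assumptions rather than details from the paper, and a GPU version would replace the two matrix-vector products with CUBLAS calls.

```python
import numpy as np

def gpower_l1(A, gamma, max_iter=200, tol=1e-6, seed=0):
    """Single-unit L1-penalized generalized power method (illustrative sketch).

    A     : (p, n) data matrix whose n columns are the (centered) variables.
    gamma : sparsity-inducing threshold; larger values yield sparser loadings.

    Returns a unit-norm sparse loading vector z of length n (or the zero
    vector if gamma is large enough to suppress every variable).
    """
    rng = np.random.default_rng(seed)
    p, n = A.shape

    # Start from a random point on the unit sphere in R^p.
    x = rng.standard_normal(p)
    x /= np.linalg.norm(x)

    for _ in range(max_iter):
        y = A.T @ x                                           # inner products a_i^T x
        t = np.sign(y) * np.maximum(np.abs(y) - gamma, 0.0)   # soft thresholding
        x_new = A @ t                                         # power (gradient) step
        nrm = np.linalg.norm(x_new)
        if nrm == 0.0:                                        # gamma too large
            break
        x_new /= nrm
        if np.linalg.norm(x_new - x) < tol:                   # converged
            x = x_new
            break
        x = x_new

    # Recover the sparse loading vector from the final iterate.
    y = A.T @ x
    z = np.sign(y) * np.maximum(np.abs(y) - gamma, 0.0)
    nz = np.linalg.norm(z)
    return z / nz if nz > 0 else z
```

In this sketch, each iteration costs two dense matrix-vector products of size p by n; it is these products that the paper offloads to the GPU to obtain the reported speedups over CBLAS and MATLAB.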
