Efficient Parallel Algorithm for Nonlinear Dimensionality Reduction on GPU
Institute of Information Science, Academia Sinica, Taipei, Taiwan
IEEE International Conference on Granular Computing (GrC), 2010
@conference{yeh2010efficient,
title={Efficient Parallel Algorithm for Nonlinear Dimensionality Reduction on GPU},
author={Yeh, T.T. and Chen, T.Y. and Chen, Y.C. and Shih, W.K.},
booktitle={2010 IEEE International Conference on Granular Computing},
pages={592--597},
year={2010},
organization={IEEE}
}
Advances in nonlinear dimensionality reduction provide a way to understand and visualize the underlying structure of complex data sets. The performance of large-scale nonlinear dimensionality reduction is of key importance in data mining, machine learning, and data analysis. In this paper, we concentrate on improving the performance of nonlinear dimensionality reduction on large-scale data sets using the GPU. In particular, we focus on the k nearest neighbor (KNN) search and sparse spectral decomposition problems for large-scale data, and propose an efficient framework for Locally Linear Embedding (LLE). We implement a k-d tree based KNN algorithm and a Krylov subspace method on the GPU to accelerate nonlinear dimensionality reduction for large-scale data. Our results show that the GPU-based k-d tree LLE runs roughly 30-60x faster than the brute-force KNN LLE model on the CPU. Overall, our methods save O(n^2 - 6n - 2k - 3) memory space.
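The paper itself implements both stages in CUDA on the GPU; the abstract alone does not give that code. As a rough illustration of the pipeline it describes, the following is a minimal CPU sketch in Python, using SciPy's cKDTree for the k-d tree KNN step and ARPACK's Lanczos solver (a Krylov subspace method, via eigsh) for the sparse spectral decomposition. The function name lle and the parameter choices (k, d, reg) are illustrative assumptions, not taken from the paper.

# Minimal CPU sketch of the LLE pipeline described in the abstract.
# NOTE: illustrative only; the paper's actual implementation is GPU/CUDA-based.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import lil_matrix, identity
from scipy.sparse.linalg import eigsh

def lle(X, k=10, d=2, reg=1e-3):
    n = X.shape[0]

    # Step 1: k nearest neighbors via a k-d tree (query k+1, drop self).
    tree = cKDTree(X)
    _, idx = tree.query(X, k=k + 1)
    idx = idx[:, 1:]

    # Step 2: local reconstruction weights, solved per point from a
    # regularized local Gram system with a sum-to-one constraint.
    W = lil_matrix((n, n))
    for i in range(n):
        Z = X[idx[i]] - X[i]                  # neighbors centered on x_i
        G = Z @ Z.T                           # local Gram matrix (k x k)
        G += reg * np.trace(G) * np.eye(k)    # regularize for stability
        w = np.linalg.solve(G, np.ones(k))
        W[i, idx[i]] = w / w.sum()

    # Step 3: sparse spectral decomposition of M = (I - W)^T (I - W).
    # eigsh calls ARPACK's Lanczos iteration, a Krylov subspace method.
    I = identity(n)
    M = ((I - W).T @ (I - W)).tocsc()
    vals, vecs = eigsh(M, k=d + 1, sigma=0.0)

    # Discard the near-constant eigenvector (eigenvalue ~ 0).
    return vecs[:, 1:d + 1]

if __name__ == "__main__":
    # Usage: embed a small random high-dimensional point cloud into 2-D.
    X = np.random.rand(500, 10)
    Y = lle(X, k=12, d=2)
    print(Y.shape)  # (500, 2)

The sketch makes the structure of the speedup claims concrete: step 1 is the KNN search the authors accelerate with a GPU k-d tree, and step 3 is the sparse eigenproblem they accelerate with a GPU Krylov subspace method.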
April 3, 2011 by hgpu