Speeding up K-Means Algorithm by GPUs
Dept. of Comput. Sci., Hong Kong Baptist Univ., Hong Kong, China
IEEE 10th International Conference on Computer and Information Technology (CIT), 2010
@conference{li2010speeding,
title={Speeding up K-Means Algorithm by GPUs},
author={Li, Y. and Zhao, K. and Chu, X. and Liu, J.},
booktitle={2010 10th IEEE International Conference on Computer and Information Technology (CIT 2010)},
pages={115--122},
year={2010},
organization={IEEE}
}
Cluster analysis plays a critical role in a wide variety of applications, but it now faces a computational challenge due to continuously increasing data volumes. Parallel computing is one of the most promising ways to overcome this challenge. In this paper, we parallelize k-Means, one of the most popular clustering algorithms, on widely available Graphics Processing Units (GPUs). Unlike existing GPU-based k-Means algorithms, we observe that data dimensionality is an important factor to take into consideration when parallelizing k-Means on GPUs. In particular, we use two different strategies for low-dimensional and high-dimensional data sets, respectively, in order to make the best use of GPU computing power. For low-dimensional data sets, we exploit GPU on-chip registers to significantly reduce data access latency. For high-dimensional data sets, we design a novel algorithm that simulates matrix multiplication and exploits both GPU on-chip registers and on-chip shared memory to achieve a high compute-to-memory-access ratio. As a result, our GPU-based k-Means algorithm is three to eight times faster than the best reported GPU-based algorithm.
April 13, 2011 by hgpu
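The high-dimensional strategy described in the abstract can be illustrated with a small CUDA sketch (not the authors' code). The dominant cost of the k-Means assignment step is the n x k matrix of dot products between points and centroids, which can be computed like a tiled matrix multiplication: shared-memory tiles give each loaded element reuse across a whole tile, and the running sum stays in a register, which is the high compute-to-memory-access idea. The kernel name, tile size, and row-major layouts below are illustrative assumptions, and coalescing and tuning details are omitted.

#include <cuda_runtime.h>

#define TILE 16  // assumed tile width; would be tuned per GPU in practice

// Computes dots[i*k + j] = x_i . c_j for n points and k centroids in d dimensions.
// points: n x d row-major, centroids: k x d row-major (layouts assumed for illustration).
// Launch (assumed): dim3 block(TILE, TILE); dim3 grid((k+TILE-1)/TILE, (n+TILE-1)/TILE);
__global__ void point_centroid_dots(const float* __restrict__ points,
                                    const float* __restrict__ centroids,
                                    float* __restrict__ dots,
                                    int n, int k, int d)
{
    __shared__ float ptile[TILE][TILE];   // tile of the point matrix
    __shared__ float ctile[TILE][TILE];   // tile of the centroid matrix

    int row = blockIdx.y * TILE + threadIdx.y;   // point index
    int col = blockIdx.x * TILE + threadIdx.x;   // centroid index
    float acc = 0.0f;                            // partial dot product kept in a register

    for (int t = 0; t < (d + TILE - 1) / TILE; ++t) {
        int dim = t * TILE;
        // Stage one TILE-wide slice of the dimensions into shared memory
        // (zero-padded at the edges so out-of-range threads add nothing).
        ptile[threadIdx.y][threadIdx.x] = (row < n && dim + threadIdx.x < d)
            ? points[row * d + dim + threadIdx.x] : 0.0f;
        ctile[threadIdx.y][threadIdx.x] = (col < k && dim + threadIdx.y < d)
            ? centroids[col * d + dim + threadIdx.y] : 0.0f;
        __syncthreads();

        // Each shared-memory element is reused TILE times, raising the
        // compute-to-memory-access ratio, as in ordinary tiled matrix multiplication.
        for (int m = 0; m < TILE; ++m)
            acc += ptile[threadIdx.y][m] * ctile[m][threadIdx.x];
        __syncthreads();
    }

    if (row < n && col < k)
        dots[row * k + col] = acc;
}

The assignment step then follows from ||x_i - c_j||^2 = ||x_i||^2 - 2 x_i.c_j + ||c_j||^2; since ||x_i||^2 is constant per point, only the dot products and precomputed centroid norms are needed to pick the nearest centroid. For low-dimensional data, the abstract's other strategy would instead keep an entire point (and its running distances) in per-thread registers, avoiding shared memory altogether.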