Large-scale multi-dimensional document clustering on GPU clusters

Yongpeng Zhang, Frank Mueller, Xiaohui Cui, Thomas Potok
Dept. of Computer Science, North Carolina State University, Raleigh, NC 27695-7534, USA
IEEE International Symposium on Parallel & Distributed Processing (IPDPS), 2010

@conference{zhang2010large,
   title={Large-scale multi-dimensional document clustering on GPU clusters},
   author={Zhang, Y. and Mueller, F. and Cui, X. and Potok, T.},
   booktitle={Parallel \& Distributed Processing (IPDPS), 2010 IEEE International Symposium on},
   pages={1--10},
   issn={1530-2075},
   organization={IEEE},
   year={2010}
}

Document clustering plays an important role in data mining systems. Recently, a flocking-based document clustering algorithm has been proposed that solves the problem through a simulation resembling the flocking behavior of birds in nature. This method is superior to other clustering algorithms, including k-means, in the sense that the outcome is not sensitive to the initial state. One limitation of this approach is that its algorithmic complexity is inherently quadratic in the number of documents, so execution time becomes a bottleneck for large document collections. In this paper, we assess the benefits of exploiting the computational power of Beowulf-like clusters equipped with contemporary Graphics Processing Units (GPUs) as a means to significantly reduce the runtime of flocking-based document clustering. Our framework scales to over one million documents processed simultaneously on a sixteen-node cluster of moderate GPUs; results are also compared to a four-node cluster with higher-end GPUs. On these clusters, we observe 30X-50X speedups, which demonstrates the potential of GPU clusters to efficiently solve massive data mining problems. To the best of our knowledge, such speedups, combined with the scalability potential of accelerator-based parallelization, are unique in the domain of document-based data mining.
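The quadratic bottleneck described in the abstract comes from every document-agent interacting with every other agent in each simulation step. The sketch below illustrates that O(n²) pairwise structure with a minimal boids-style update in NumPy; the function name, parameters, and the cosine-similarity attract/repel rule are illustrative assumptions for this listing, not the authors' actual implementation (which runs on GPU clusters).

```python
import numpy as np

def flocking_step(positions, features, dt=0.1, radius=1.0, sim_threshold=0.5):
    """One illustrative flocking update for document clustering.

    Each document is an agent on a 2-D plane; it moves toward nearby
    agents whose feature vectors are similar and away from dissimilar
    ones. The nested loop below is the O(n^2) pairwise interaction
    that makes the sequential algorithm quadratic in document count.
    """
    n = positions.shape[0]
    velocities = np.zeros_like(positions)

    # Cosine similarity between all document feature vectors.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.maximum(norms, 1e-12)
    sim = unit @ unit.T

    for i in range(n):            # quadratic pairwise loop: the bottleneck
        for j in range(n):
            if i == j:
                continue
            diff = positions[j] - positions[i]
            dist = np.linalg.norm(diff)
            if 0.0 < dist < radius:
                if sim[i, j] > sim_threshold:
                    velocities[i] += diff / dist   # attract similar documents
                else:
                    velocities[i] -= diff / dist   # repel dissimilar documents

    return positions + dt * velocities
```

Because each of the n² pairwise terms is independent within a step, the loop maps naturally onto GPU threads, which is the parallelization opportunity the paper exploits.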

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors

Contact us: