
Scalable Data Clustering using GPU Clusters

Andrew Pangborn, Gregor von Laszewski, James Cavenaugh, Muhammad Shaaban, Roy Melton
David H. Smith Center for Vaccine Biology and Immunology, School of Medicine and Dentistry, The University of Rochester, 601 Elmwood Ave., Box 609, Rochester, NY 14642
The University of Rochester, 2011

@article{pangborn2011scalable,
   title={Scalable Data Clustering using GPU Clusters},
   author={Pangborn, A. and von Laszewski, G. and Cavenaugh, J. and Shaaban, M. and Melton, R.},
   year={2011}
}

The computational demands of multivariate clustering grow rapidly, so processing large data sets, such as those found in flow cytometry, is very time consuming on a single CPU. Fortunately, these techniques lend themselves naturally to large-scale parallel processing. To address the computational demands, graphics processing units, specifically NVIDIA's CUDA framework and Tesla architecture, were investigated as a low-cost, high-performance solution for a number of clustering algorithms. C-means and Expectation Maximization with Gaussian mixture models were implemented using the CUDA framework. The implementations use a hybrid of CUDA, OpenMP, and MPI to scale to many GPUs on multiple nodes in a high-performance computing environment. This framework is envisioned as part of a larger cloud-based workflow service where biologists can apply multiple algorithms and parameter sweeps to their data sets and quickly receive a thorough set of results for further expert analysis. Improvements over previous GPU-accelerated implementations range from 1.42x to 21x for C-means and from 3.72x to 5.65x for the Gaussian mixture model on non-trivial data sets. Using a single NVIDIA GTX 260, speedups average 90x for C-means and 74x for the Gaussian mixture model on flow cytometry files, compared to optimized C code running on a single core of a modern Intel CPU. Using the TeraGrid "Lincoln" high-performance cluster at NCSA with 128 Tesla C1060 GPUs, C-means achieves 42% parallel efficiency and a CPU speedup of 4794x. The Gaussian mixture model achieves 72% parallel efficiency and a CPU speedup of 6286x.
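To illustrate the kind of data-parallel kernel the abstract describes, below is a minimal sketch of a fuzzy C-means membership-update step in CUDA. It is not the authors' code: the kernel name, data layout (events and centers stored row-major, one thread per event), and the MAX_CLUSTERS bound are illustrative assumptions. In the full hybrid design described in the paper, each MPI process would run such kernels on its own GPU over a slice of the events, with OpenMP managing multiple GPUs per node and partial sums reduced across nodes to update the cluster centers.

// Minimal sketch of a fuzzy C-means membership update in CUDA.
// Layout, names, and the cluster-count bound are assumptions for illustration.
#include <cuda_runtime.h>
#include <math.h>

#define MAX_CLUSTERS 64   // assumed upper bound so per-event distances fit in local memory

__global__ void cmeans_memberships(const float *events,   // [numEvents x numDims]
                                   const float *centers,  // [numClusters x numDims]
                                   float *memberships,    // [numClusters x numEvents]
                                   int numEvents, int numDims,
                                   int numClusters, float fuzziness)
{
    int n = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per event
    if (n >= numEvents) return;

    float dist[MAX_CLUSTERS];
    // Euclidean distance from event n to every cluster center
    for (int c = 0; c < numClusters; ++c) {
        float d = 0.0f;
        for (int k = 0; k < numDims; ++k) {
            float diff = events[n * numDims + k] - centers[c * numDims + k];
            d += diff * diff;
        }
        dist[c] = sqrtf(d) + 1e-30f;   // guard against division by zero
    }

    // Standard fuzzy C-means membership: u_cn = 1 / sum_k (d_c / d_k)^(2/(m-1))
    float expo = 2.0f / (fuzziness - 1.0f);
    for (int c = 0; c < numClusters; ++c) {
        float sum = 0.0f;
        for (int k = 0; k < numClusters; ++k)
            sum += powf(dist[c] / dist[k], expo);
        memberships[c * numEvents + n] = 1.0f / sum;
    }
}

A host loop would alternate this membership kernel with a center-update reduction until convergence; in the multi-GPU setting, each rank contributes its local weighted sums via an MPI allreduce before recomputing the centers.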
