Biomedical image analysis on a cooperative cluster of GPUs and multicores

Timothy D.R. Hartley, Umit Catalyurek, Antonio Ruiz, Francisco Igual, Rafael Mayo, Manuel Ujaldon
Departments of Biomedical Informatics and Electrical and Computer Engineering, The Ohio State University, Columbus, OH, USA
Proceedings of the 22nd annual international conference on Supercomputing, ICS ’08


@inproceedings{hartley2008biomedical,
   title={Biomedical image analysis on a cooperative cluster of GPUs and multicores},
   author={Hartley, T.D.R. and Catalyurek, U. and Ruiz, A. and Igual, F. and Mayo, R. and Ujaldon, M.},
   booktitle={Proceedings of the 22nd annual international conference on Supercomputing},
   year={2008}
}




We are currently witnessing the emergence of two paradigms in parallel computing: stream processing and multi-core CPUs. Represented by solid commercial products widely available in commodity PCs, GPUs and multi-core CPUs together offer an unprecedented combination of high performance at low cost. The scientific computing community needs application models and middleware that scale efficiently to hundreds of internal processing units. The purpose of the work we present here is twofold: first, a cooperative environment is designed so that both parallel models can coexist and complement one another. Second, beyond the parallelism of multiple internal cores, further parallelism is introduced by combining multiple CPU sockets, multiple GPUs, and multiple nodes into a single multiprocessor platform, which exceeds 10 TFLOPS when using 16 nodes. We illustrate our cooperative parallelization approach by implementing a large-scale biomedical image analysis application which contains a number of assorted kernels, including typical streaming operators, co-occurrence matrices, convolutions, and histograms. Experimental results are compared among different implementation strategies, and almost linear speed-up is achieved when all coexisting methods on CPUs and GPUs are combined.

* * *


HGPU group © 2010-2021 hgpu.org

All rights belong to the respective authors
