
Experiences with hybrid clusters

Eric Van Hensbergen, Damir Jamsek
IBM Research
Cluster Computing and Workshops, 2009. CLUSTER ’09

@inproceedings{jamsek2009experiences,
   title={Experiences with hybrid clusters},
   author={Jamsek, D. and Van Hensbergen, E.},
   booktitle={Cluster Computing and Workshops, 2009. CLUSTER '09. IEEE International Conference on},
   pages={1--4},
   year={2009},
   organization={IEEE}
}


The complexity of modern microprocessor design, involving billions of transistors at increasingly dense scales, creates many challenges, particularly in the areas of design reliability and predictable yields. Researchers at IBM's Austin Research Lab have increasingly depended on software-based simulation of various aspects of the design and manufacturing process to help address these challenges. The computational complexity and sheer scale of these simulations have led us to explore high-performance hybrid computing clusters to accelerate the design process. The hybrid clusters currently in use are composed primarily of commodity workstations and servers incorporating commodity NVIDIA GPU graphics cards and TESLA GPU computational accelerators. We have also been experimenting with blade clusters composed of both general-purpose servers and PowerXCell accelerators, leveraging the computational throughput of the Cell processor. In this paper we detail our experiences with accelerating our workloads on these hybrid cluster platforms. We discuss our initial approach of combining hybrid runtimes such as CUDA with MPI to address cluster computation. We also describe a custom hybrid cluster infrastructure we are developing to deal with some of the perceived shortcomings of MPI and other traditional cluster tools in hybrid computing environments.
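The combination of CUDA with MPI mentioned in the abstract typically pairs one MPI rank with one local GPU per node. The sketch below is not taken from the paper; it is a minimal, hypothetical illustration of that pattern, in which each rank binds to a device, runs a kernel on local data, and the results are combined across the cluster with an MPI reduction (compile with nvcc and link against an MPI implementation, e.g. mpicc/nvcc wrappers).

// Hypothetical sketch, not from the paper: one MPI rank drives one GPU.
// Each rank scales a local vector on its GPU, then rank 0 collects a checksum.
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void scale(float *v, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= a;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Bind this rank to a local GPU (assumes ranks are launched one per GPU).
    int ngpus = 0;
    cudaGetDeviceCount(&ngpus);
    cudaSetDevice(rank % ngpus);

    const int n = 1 << 20;
    float *h = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);

    // Per-rank GPU work: scale the vector by a rank-dependent factor.
    scale<<<(n + 255) / 256, 256>>>(d, (float)(rank + 1), n);
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);

    // Combine per-rank results across the cluster with MPI.
    double local = 0.0, total = 0.0;
    for (int i = 0; i < n; ++i) local += h[i];
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("cluster-wide sum across %d ranks: %f\n", size, total);

    cudaFree(d);
    free(h);
    MPI_Finalize();
    return 0;
}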
