
A Complete and Efficient CUDA-Sharing Solution for HPC Clusters

Antonio J. Peña, Carlos Reaño, Federico Silla, Rafael Mayo, Enrique S. Quintana-Ortí, José Duato
MCS, Argonne National Laboratory, Argonne, IL 60439, USA
Technical report ANL/MCS-P5137-0514, 2014

@article{pena2014complete,
   title={A Complete and Efficient CUDA-Sharing Solution for HPC Clusters},
   author={Pe{\~n}a, Antonio J and Rea{\~n}o, Carlos and Silla, Federico and Mayo, Rafael and Quintana-Ort{\'\i}, Enrique S and Duato, Jos{\'e}},
   journal={Parallel Computing},
   year={2014},
   publisher={Elsevier}
}


In this paper we detail the key features, architectural design, and implementation of rCUDA, an advanced framework that enables remote and transparent GPGPU acceleration in HPC clusters. rCUDA decouples GPUs from nodes, forming pools of shared accelerators, which brings enhanced flexibility to cluster configurations. This opens the door to configurations with fewer accelerators than nodes, and also permits a single node to exploit the whole set of GPUs installed in the cluster. In our proposal, CUDA applications can seamlessly interact with any GPU in the cluster, independently of its physical location. Thus, GPUs can be either distributed among compute nodes or concentrated in dedicated GPGPU servers, depending on the cluster administrator's policy. This proposal leads to savings not only in space but also in energy, acquisition, and maintenance costs. The performance evaluation in this paper, with a series of benchmarks and a production application, clearly demonstrates the viability of the proposal. Concretely, experiments with the matrix-matrix product reveal excellent performance compared with regular executions on a local GPU; on a much more complex application, the GPU-accelerated LAMMPS, we attain a speedup of up to 11x when employing 8 remote accelerators from a single node, with respect to a 12-core CPU-only execution. We also evaluate GPGPU service interaction in compute nodes, remote acceleration in dedicated GPGPU servers, and the data transfer performance of similar GPU virtualization frameworks.
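To make the transparency claim concrete, below is a minimal sketch of the kind of matrix-matrix product benchmark the abstract refers to, written as an ordinary cuBLAS program. The program contains no rCUDA-specific calls: under rCUDA, the application is linked against the rCUDA client library instead of the native CUDA runtime, and the remote server hosting the GPU is selected outside the source code (the rCUDA user guide describes environment variables such as RCUDA_DEVICE_0 for this; exact names may vary by release). The matrix size and values here are illustrative only, and error checking is omitted for brevity.

/* sgemm_example.cu -- minimal cuBLAS matrix-matrix product.
 * Standard CUDA/cuBLAS code; no rCUDA-specific calls are needed,
 * since rCUDA intercepts the CUDA APIs at the library boundary.
 * Build (assumed): nvcc sgemm_example.cu -lcublas -o sgemm_example */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

#define N 1024  /* square matrix dimension (illustrative) */

int main(void) {
    size_t bytes = (size_t)N * N * sizeof(float);
    float *hA = (float *)malloc(bytes);
    float *hB = (float *)malloc(bytes);
    float *hC = (float *)malloc(bytes);
    for (int i = 0; i < N * N; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    /* Device allocations and transfers; under rCUDA these are
     * forwarded to the remote GPU server transparently. */
    float *dA, *dB, *dC;
    cudaMalloc((void **)&dA, bytes);
    cudaMalloc((void **)&dB, bytes);
    cudaMalloc((void **)&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    /* C = alpha * A * B + beta * C (column-major) */
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, N, N, N,
                &alpha, dA, N, dB, N, &beta, dC, N);
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);

    /* Each C entry is sum over N of 1.0 * 2.0 = 2N. */
    printf("C[0] = %f (expected %f)\n", hC[0], 2.0f * N);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}

Because the interception happens below the application, the same binary can target a local GPU or a remote one from the shared pool without recompilation, which is the mechanism behind the flexible cluster configurations described above.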
