
The development of Mellanox/NVIDIA GPUDirect over InfiniBand – a new model for GPU to GPU communications

Gilad Shainer, Ali Ayoub, Pak Lui, Tong Liu, Michael Kagan, Christian Trott, Greg Scantlen, Paul Crozier
HPC Advisory Council, Sunnyvale, CA, USA
Computer Science – Research and Development (8 April 2011), pp. 1–7.

@article{shainerdevelopment,

   title={The development of Mellanox/NVIDIA GPUDirect over InfiniBand – a new model for GPU to GPU communications},

   author={Shainer, G. and Ayoub, A. and Lui, P. and Liu, T. and Kagan, M. and Trott, C.R. and Scantlen, G. and Crozier, P.S.},

   journal={Computer Science – Research and Development},

   pages={1–7},

   issn={1865-2034},

   publisher={Springer},

   year={2011}

}


The usage and adoption of general-purpose GPUs (GPGPU) in HPC systems is increasing due to the unparalleled performance advantage of GPUs and their ability to fulfill the ever-increasing demand for floating-point operations. While the GPU can offload many of an application's parallel computations, the system architecture of a GPU-CPU-InfiniBand server requires the CPU to initiate and manage memory transfers between remote GPUs over the high-speed InfiniBand network. In this paper we introduce for the first time a new technology, GPUDirect, that enables Tesla GPUs to transfer data via InfiniBand without CPU involvement or extra buffer copies, dramatically reducing GPU communication time and increasing overall system performance and efficiency. We also explore for the first time the performance benefits of GPUDirect using the Amber and LAMMPS applications.
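The data-path difference the abstract describes can be illustrated with MPI host code. In the staged model, the CPU copies GPU data into a host buffer before posting the InfiniBand send (and back again on receive); in the direct model, the device pointer is handed straight to the communication library. This is only a hedged sketch: it assumes a CUDA-aware MPI build (a mechanism that matured after this paper) as a stand-in for the direct path, and the buffer size and rank pairing are illustrative, not taken from the paper.

```cuda
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

#define N (1 << 20)  /* illustrative message size, in floats */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    float *d_buf;                              /* buffer in GPU memory */
    cudaMalloc((void **)&d_buf, N * sizeof(float));
    int peer = rank ^ 1;                       /* rank 0 pairs with rank 1 */

    /* Staged path: the CPU manages the transfer through host memory,
     * adding a device-to-host copy before the send and a host-to-device
     * copy after the receive. */
    float *h_buf = (float *)malloc(N * sizeof(float));
    if (rank == 0) {
        cudaMemcpy(h_buf, d_buf, N * sizeof(float), cudaMemcpyDeviceToHost);
        MPI_Send(h_buf, N, MPI_FLOAT, peer, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(h_buf, N, MPI_FLOAT, peer, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        cudaMemcpy(d_buf, h_buf, N * sizeof(float), cudaMemcpyHostToDevice);
    }

    /* Direct path: with a CUDA-aware MPI the device pointer is passed
     * directly and the staging copies above disappear. */
    if (rank == 0) {
        MPI_Send(d_buf, N, MPI_FLOAT, peer, 1, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, N, MPI_FLOAT, peer, 1, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    free(h_buf);
    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

Removing the staging copies is exactly what shortens the GPU-to-GPU communication time the paper measures with Amber and LAMMPS.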

HGPU group © 2010-2024 hgpu.org
