Infiniband-Verbs on GPU: A case study of controlling an Infiniband network device from the GPU

Lena Oden, Holger Fröning, Franz-Joseph Pfreundt
Fraunhofer Institute for Industrial Mathematics, Competence Center High Performance Computing, Kaiserslautern, Germany
Fourth International Workshop on Accelerators and Hybrid Exascale Systems (AsHES), in conjunction with IPDPS, 2014


@inproceedings{oden2014infiniband,
   title={Infiniband-Verbs on GPU: A case study of controlling an Infiniband network device from the GPU},
   author={Oden, Lena and Fr{\"o}ning, Holger and Pfreundt, Franz-Joseph},
   booktitle={Parallel and Distributed Processing Symposium Workshops \& PhD Forum (IPDPSW)},
   year={2014}
}




Due to their massive parallelism and high performance per watt, GPUs have gained great popularity in high performance computing and are strong candidates for future exascale systems. However, communication and data transfer in GPU-accelerated systems remain a challenging problem. Since a GPU normally cannot control a network device, a hybrid programming model is preferred today, in which the GPU is used for computation while the CPU handles communication. As a result, communication between distributed GPUs suffers from unnecessary overhead introduced by switching control flow between GPU and CPU. In this work, we modify the user-space libraries and device drivers of GPUs and an InfiniBand network device so that the GPU can control the InfiniBand device and independently source and sink communication requests without any involvement of the CPU. Our performance analysis details the differences from hybrid communication models; in particular, it shows that the CPU's advantage in generating work requests outweighs the overhead of context switching. In other words, our results show that complex networking protocols like IB Verbs are better handled by CPUs despite the time penalties of context switching, since the overhead of work-request generation cannot be parallelized and does not fit the highly parallel programming model of GPUs.