Construction of a Virtual Cluster by Integrating PCI Pass-Through for GPU and InfiniBand Virtualization in Cloud

Chao-Tung Yang, Wei-Shen Ou
Department of Computer Science, Tunghai University, Taichung City, 40704 Taiwan ROC
14th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT’13), 2013



NVIDIA’s CUDA enables programmers to develop highly parallel applications using a few core parallel constructs: hierarchical thread blocks, shared memory, and barrier synchronization. Programs developed with CUDA can achieve remarkable acceleration. The graphics processor can therefore play an important role in cloud computing in a cluster environment, because it can be used to build a high-performance computing platform. Virtualization is a central part of cloud architecture: a virtual machine given access to an NVIDIA graphics card gains CUDA high-performance computing capability, so the machine has not only virtual CPUs but also a physical graphics processor for computation, greatly improving its performance. InfiniBand, a high-bandwidth, low-latency interconnect, is widely used in high-performance computing. Xen, KVM, and VMware are well-known virtualization platforms with good performance and stability. In this work, a high-performance cluster computing environment is built on the Xen, KVM, and VMware virtual platforms, with graphics processors providing powerful computing capability and InfiniBand, which is faster than Ethernet, as the transmission medium. Finally, the High Performance Linpack (HPL) benchmark is used to test the computing capability of this cluster environment.
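The three CUDA constructs named above can be seen together in a minimal kernel sketch. This example is illustrative only (the kernel and its names are not from the paper): each thread block loads a tile of the input into shared memory, synchronizes at a barrier, and performs a tree reduction to produce one partial sum per block.

```cuda
#include <cuda_runtime.h>

// Hypothetical block-level sum reduction illustrating hierarchical thread
// blocks, shared memory, and barrier synchronization.
__global__ void blockSum(const float *in, float *out, int n) {
    __shared__ float cache[256];              // shared memory: one tile per block
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;  // hierarchical thread indexing
    cache[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                          // barrier: wait until the tile is loaded

    // Tree reduction within the block; a barrier after each halving step
    // keeps every thread's view of shared memory consistent.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride) cache[tid] += cache[tid + stride];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = cache[0]; // one partial sum per block
}
```

In a pass-through setup such as the one described here, a kernel like this runs unchanged inside the virtual machine, since the guest sees the physical GPU directly rather than an emulated device.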
