Evaluating GPU Passthrough in Xen for High Performance Cloud Computing

Andrew J. Younge, John Paul Walters, Stephen Crago, Geoffrey C. Fox
Pervasive Technology Institute, Indiana University, 2719 E 10th St., Bloomington, IN 47408, U.S.A.
Workshop on High-Performance Grid and Cloud Computing (HPGC), 2014


@inproceedings{younge2014evaluating,
   title={Evaluating GPU Passthrough in Xen for High Performance Cloud Computing},
   author={Younge, Andrew J. and Walters, John Paul and Crago, Stephen and Fox, Geoffrey C.},
   booktitle={Workshop on High-Performance Grid and Cloud Computing (HPGC)},
   year={2014}
}
With the advent of virtualization and Infrastructure-as-a-Service (IaaS), the broader scientific computing community is considering the use of clouds for its technical computing needs. This is due to the relative scalability, ease of use, and advanced user-environment customization that clouds provide, as well as the many novel computing paradigms available for data-intensive applications. However, there is concern about the performance gap between IaaS and typical high performance computing (HPC) resources, which could limit the applicability of IaaS for many potential scientific users. Most recently, general-purpose graphics processing units (GPGPUs or GPUs) have become commonplace within high performance computing. We look to bridge the gap between supercomputing and clouds by providing GPU-enabled virtual machines (VMs) and investigating their feasibility for advanced scientific computation. Specifically, the Xen hypervisor is utilized to leverage specialized hardware-assisted I/O virtualization and PCI passthrough in order to provide advanced HPC-centric Nvidia GPUs directly in guest VMs. This methodology is evaluated by measuring the performance of two Nvidia Tesla GPUs within Xen VMs and comparing them to bare-metal hardware. Results show that PCI passthrough of GPUs within virtual machines is a viable use case for many scientific computing workflows, and could help support high performance cloud infrastructure in the near future.
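As a rough illustration of the methodology the abstract describes, the sketch below shows the standard Xen workflow for passing a PCI GPU through to an HVM guest via the `xl` toolstack. The PCI address (0000:03:00.0) and guest parameters are hypothetical placeholders, not values from the paper; the steps that require a real Xen dom0 are shown as comments.

```shell
# Minimal sketch of Xen PCI passthrough for an Nvidia GPU, assuming the
# GPU sits at PCI BDF 0000:03:00.0 (find the real address with `lspci`).

# 1. On the Xen host, detach the GPU from its host driver and mark it
#    assignable (requires dom0 root; shown as comments):
#      modprobe xen-pciback
#      xl pci-assignable-add 0000:03:00.0

# 2. Reference the device in the HVM guest's xl configuration file.
#    `permissive=1` relaxes PCI config-space write filtering, which
#    proprietary GPU drivers often need.
cat > gpu-guest.cfg <<'EOF'
builder = "hvm"
name    = "gpu-vm"
memory  = 8192
vcpus   = 4
pci     = [ '0000:03:00.0,permissive=1' ]
EOF

# 3. Boot the guest (commented; needs a running Xen dom0):
#      xl create gpu-guest.cfg

# Show the passthrough line we generated:
grep 'pci' gpu-guest.cfg
```

Inside the guest, the GPU then appears as an ordinary PCI device, so the stock Nvidia driver and CUDA toolkit can be installed exactly as on bare metal.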

* * *

HGPU group © 2010-2021 hgpu.org

All rights belong to the respective authors
