Efficient Resource Sharing Through GPU Virtualization on Accelerated High Performance Computing Systems

Teng Li, Vikram K. Narayana, Tarek El-Ghazawi
NSF Center for High-Performance Reconfigurable Computing (CHREC), Department of Electrical and Computer Engineering, The George Washington University, 801 22nd Street NW, Washington, DC, 20052, USA
arXiv:1511.07658 [cs.DC], (24 Nov 2015)

@article{li2015efficient,
   title={Efficient Resource Sharing Through GPU Virtualization on Accelerated High Performance Computing Systems},
   author={Li, Teng and Narayana, Vikram K. and El-Ghazawi, Tarek},
   year={2015},
   month={nov},
   eprint={1511.07658},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}


The High Performance Computing (HPC) field is witnessing a widespread adoption of Graphics Processing Units (GPUs) as co-processors for conventional homogeneous clusters. The adoption of the prevalent Single-Program Multiple-Data (SPMD) programming paradigm for GPU-based parallel processing brings the challenge of resource underutilization, given the asymmetrical processor/co-processor distribution. In other words, under SPMD, a balanced CPU/GPU distribution is required to ensure full resource utilization. In this paper, we propose a GPU resource virtualization approach that allows underutilized microprocessors to efficiently share the GPUs. We propose an efficient GPU sharing scenario achieved through GPU virtualization and analyze its performance potential through execution models. We further present the implementation details of the virtualization infrastructure, followed by experimental analyses. The results demonstrate considerable performance gains with GPU virtualization. Furthermore, the proposed solution enables full utilization of asymmetrical resources through efficient GPU sharing among microprocessors, while incurring low overhead from the added virtualization layer.
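As a rough illustration of the sharing pattern the abstract targets, and not the paper's actual virtualization infrastructure, the sketch below shows several CPU threads all issuing work to a single GPU through private CUDA streams, i.e., the N:1 processor-to-GPU mapping that the proposed layer is meant to make efficient. The thread count, kernel, and buffer sizes are illustrative assumptions.

// Minimal sketch (not the paper's implementation): NUM_CPU_THREADS host threads
// share GPU 0, each through its own CUDA stream. All names and sizes here are
// illustrative assumptions.
#include <cstdio>
#include <thread>
#include <vector>
#include <cuda_runtime.h>

#define NUM_CPU_THREADS 4   // CPU workers sharing the single GPU (assumption)
#define N (1 << 20)         // elements per worker's GPU workload (assumption)

__global__ void scale_kernel(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

void cpu_worker(int id) {
    cudaSetDevice(0);                       // every worker targets the same GPU
    cudaStream_t stream;
    cudaStreamCreate(&stream);              // private stream per CPU worker

    float *d_buf;
    cudaMalloc(&d_buf, N * sizeof(float));
    cudaMemsetAsync(d_buf, 0, N * sizeof(float), stream);

    // Launch this worker's share of the GPU work on its own stream.
    scale_kernel<<<(N + 255) / 256, 256, 0, stream>>>(d_buf, 2.0f + id, N);
    cudaStreamSynchronize(stream);          // wait only for this worker's work

    printf("CPU worker %d finished its share of GPU work\n", id);
    cudaFree(d_buf);
    cudaStreamDestroy(stream);
}

int main() {
    std::vector<std::thread> workers;
    for (int id = 0; id < NUM_CPU_THREADS; ++id)
        workers.emplace_back(cpu_worker, id);
    for (auto &t : workers) t.join();       // all workers shared one GPU
    return 0;
}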
