Guardian: Safe GPU Sharing in Multi-Tenant Environments

Manos Pavlidakis, Giorgos Vasiliadis, Anargyros Argyros, Stelios Mavridis, Antony Chazapis, Angelos Bilas
FORTH-ICS, Greece
25th International Middleware Conference (MIDDLEWARE ’24), 2024

@inproceedings{pavlidakis2024guardian,
   title={Guardian: Safe GPU Sharing in Multi-Tenant Environments},
   author={Pavlidakis, Manos and Vasiliadis, Giorgos and Mavridis, Stelios and Argyros, Anargyros and Chazapis, Antony and Bilas, Angelos},
   booktitle={Proceedings of the 25th International Middleware Conference},
   pages={313--326},
   year={2024}
}


Modern GPU applications, such as machine learning (ML) workloads, often utilize only a fraction of a GPU, leading to GPU underutilization in cloud environments. Sharing a GPU across multiple applications from different tenants improves resource utilization and, consequently, cost, energy, and power efficiency. However, GPU sharing raises memory-safety concerns, because kernels from different tenants must share a single GPU address space. Existing spatial-sharing mechanisms either lack fault isolation for memory accesses or require static partitioning, which leads to limited deployability or low utilization. In this paper, we present Guardian, a PTX-level bounds-checking approach that provides memory isolation and supports dynamic GPU spatial sharing. Guardian relies on three mechanisms: (1) it divides the shared GPU address space into separate partitions for different applications; (2) it intercepts and checks all GPU-related calls at the lowest level, fencing erroneous operations; and (3) it instruments all GPU kernels at the PTX level, which is available even for closed-source GPU libraries, fencing every kernel memory access that falls outside the application's memory bounds. Guardian is transparent to applications and supports real-world frameworks, such as Caffe and PyTorch, that issue billions of GPU kernels. Our evaluation shows that Guardian's overhead relative to native execution for these frameworks is between 4% and 12%, and 9% on average.
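To make mechanism (2) concrete, the following is a minimal sketch of call interception via symbol interposition, assuming an LD_PRELOAD-style shim over the CUDA runtime API; the paper intercepts calls at a lower level, and the in_partition helper here is a hypothetical stand-in for Guardian's partition check:

// Hypothetical interception sketch (not Guardian's code): wrap cudaMalloc
// so every allocation is validated against the tenant's partition before
// the application ever sees the pointer.
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>
#include <cuda_runtime.h>

// Hypothetical helper: true if [p, p + size) lies inside this tenant's
// partition of the shared GPU address space.
extern int in_partition(const void *p, size_t size);

cudaError_t cudaMalloc(void **devPtr, size_t size)
{
    // Resolve the real cudaMalloc hidden behind this shim, once.
    static cudaError_t (*real_malloc)(void **, size_t);
    if (!real_malloc)
        real_malloc = (cudaError_t (*)(void **, size_t))
                          dlsym(RTLD_NEXT, "cudaMalloc");

    cudaError_t err = real_malloc(devPtr, size);
    if (err == cudaSuccess && !in_partition(*devPtr, size)) {
        cudaFree(*devPtr);   // fence the erroneous operation
        *devPtr = NULL;
        return cudaErrorMemoryAllocation;
    }
    return err;
}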
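Mechanism (3) inserts a bounds check before each memory access in a kernel's PTX. Below is a minimal sketch of the equivalent check, written in CUDA C++ for readability rather than in PTX; the guarded_load helper, the lo/hi partition bounds, and the redirect-to-partition-base fencing policy are all illustrative assumptions, not Guardian's actual instrumentation:

// Hypothetical sketch of a fenced memory access, expressed in CUDA C++.
// lo/hi delimit the tenant's partition of the shared GPU address space.
#include <cstddef>
#include <cstdint>

__device__ __forceinline__
float guarded_load(const float *p, uintptr_t lo, uintptr_t hi)
{
    uintptr_t addr = (uintptr_t)p;
    // Fence the access: an out-of-partition address is redirected to a
    // safe in-partition location instead of touching another tenant.
    if (addr < lo || addr + sizeof(float) > hi)
        addr = lo;                       // illustrative fencing policy
    return *(const float *)addr;
}

__global__ void scale(float *out, const float *in, float a, size_t n,
                      uintptr_t lo, uintptr_t hi)
{
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a * guarded_load(in + i, lo, hi);
}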