
Managing Multi Instance GPUs for High Throughput and Energy Savings

Abhijeet Saraha, Yuanbo Li, Chris Porter, Santosh Pande
Georgia Institute of Technology
@misc{saraha2025managingmultiinstancegpus,
  title={Managing Multi Instance GPUs for High Throughput and Energy Savings},
  author={Abhijeet Saraha and Yuanbo Li and Chris Porter and Santosh Pande},
  year={2025},
  eprint={2508.18556},
  archivePrefix={arXiv},
  primaryClass={cs.DC},
  url={https://arxiv.org/abs/2508.18556}
}


Modern GPUs such as the Ampere series (A30, A100) and the Hopper series (H100, H200) offer both performance and security isolation features. They also support substantial concurrency, but taking advantage of it can be quite challenging due to the complex constraints on partitioning the chip. In this work, we develop partitioning and scheduling schemes for a variety of workloads, ranging from scientific to modern ML workloads, including LLMs. We develop several schemes involving dynamic memory estimation, partition fusion, and partition fission. We also support process restart to recover from out-of-memory errors, with early restart as an optimization. This approach yields up to 6.20x throughput and 5.93x energy improvements for general workloads, and 1.59x and 1.12x improvements in throughput and energy, respectively, for ML workloads on an A100 GPU. We also apply this technique to LLM workloads and see up to 1.43x throughput improvement and 1.11x energy savings.
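A scheduler of the kind described above needs an up-to-date view of the MIG layout and per-instance memory, the signal behind the paper's dynamic memory estimation and fusion/fission decisions. The following is a minimal sketch of how such a snapshot could be taken, assuming an NVIDIA driver with MIG enabled and the pynvml (nvidia-ml-py) bindings installed; it is an illustrative read-only example, not the authors' implementation.

```python
# Minimal sketch: enumerate MIG instances on one GPU and report free memory.
# Assumes MIG is already enabled and partitions already exist; it does not
# create or destroy partitions (fusion/fission) itself.
from pynvml import (
    nvmlInit, nvmlShutdown,
    nvmlDeviceGetHandleByIndex, nvmlDeviceGetMigMode,
    nvmlDeviceGetMaxMigDeviceCount, nvmlDeviceGetMigDeviceHandleByIndex,
    nvmlDeviceGetMemoryInfo, NVMLError,
)

def snapshot_mig_memory(gpu_index: int = 0):
    """Return a list of (mig_slot, free_bytes, total_bytes) for one GPU."""
    nvmlInit()
    try:
        parent = nvmlDeviceGetHandleByIndex(gpu_index)
        current_mode, _pending = nvmlDeviceGetMigMode(parent)
        if current_mode != 1:  # 1 == NVML_DEVICE_MIG_ENABLE
            return []
        snapshot = []
        for slot in range(nvmlDeviceGetMaxMigDeviceCount(parent)):
            try:
                mig = nvmlDeviceGetMigDeviceHandleByIndex(parent, slot)
            except NVMLError:
                continue  # slot not currently backed by a MIG instance
            mem = nvmlDeviceGetMemoryInfo(mig)
            snapshot.append((slot, mem.free, mem.total))
        return snapshot
    finally:
        nvmlShutdown()

if __name__ == "__main__":
    for slot, free, total in snapshot_mig_memory():
        print(f"MIG slot {slot}: {free / 2**30:.1f} GiB free "
              f"of {total / 2**30:.1f} GiB")
```

A fusion/fission policy like the one the paper describes could poll such a snapshot and merge under-utilized instances (fusion) or split large ones (fission) when queued jobs' estimated memory footprints do not fit the current layout.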


* * *

HGPU group © 2010-2025 hgpu.org

All rights belong to the respective authors
