Is the GPU Half-Empty or Half-Full? Practical Scheduling Techniques for LLMs

Ferdi Kossmann, Bruce Fontaine, Daya Khudia, Michael Cafarella, Samuel Madden
MIT CSAIL
arXiv:2410.17840 [cs.LG]

@misc{kossmann2024gpuhalfemptyhalffullpractical,
   title={Is the GPU Half-Empty or Half-Full? Practical Scheduling Techniques for LLMs},
   author={Ferdi Kossmann and Bruce Fontaine and Daya Khudia and Michael Cafarella and Samuel Madden},
   year={2024},
   eprint={2410.17840},
   archivePrefix={arXiv},
   primaryClass={cs.LG},
   url={https://arxiv.org/abs/2410.17840}
}


Serving systems for Large Language Models (LLMs) improve throughput by processing several requests concurrently. However, multiplexing hardware resources between concurrent requests involves non-trivial scheduling decisions. Practical serving systems typically implement these decisions at two levels: first, a load balancer routes requests to different servers, each of which holds a replica of the LLM. Then, on each server, an engine-level scheduler decides when to run a request, and when to queue or preempt it. Improved scheduling policies may benefit a wide range of LLM deployments and can often be implemented as "drop-in replacements" for a system's current policy. In this work, we survey scheduling techniques from the literature and from practical serving systems. We find that schedulers from the literature often achieve good performance but introduce significant complexity. In contrast, schedulers in practical deployments often leave easy performance gains on the table but are easy to implement, deploy, and configure. This finding motivates us to introduce two new scheduling techniques, which are both easy to implement and outperform current techniques on production workload traces.
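To make the two-level design concrete, here is a minimal Python sketch that pairs a least-loaded load balancer with a per-server engine scheduler. The class names, the fixed concurrency cap, and the least-loaded routing rule are illustrative assumptions for exposition; they are common baselines, not the policies proposed in the paper.

from collections import deque

class EngineScheduler:
    """Engine-level scheduler: decides when to run, queue, or preempt a request.
    The admission rule (a fixed cap on running requests) is a placeholder,
    not the paper's technique."""
    def __init__(self, max_running):
        self.max_running = max_running
        self.waiting = deque()  # pending requests, served first-come first-served
        self.running = set()

    def submit(self, request_id):
        self.waiting.append(request_id)

    def step(self):
        # Admit queued requests while capacity remains.
        while self.waiting and len(self.running) < self.max_running:
            self.running.add(self.waiting.popleft())

    def preempt(self, request_id):
        # Stop a running request and return it to the head of the queue.
        self.running.discard(request_id)
        self.waiting.appendleft(request_id)

class LoadBalancer:
    """Routes each request to the replica with the fewest outstanding requests."""
    def __init__(self, replicas):
        self.replicas = replicas

    def route(self, request_id):
        target = min(self.replicas, key=lambda r: len(r.running) + len(r.waiting))
        target.submit(request_id)

# Usage: two model replicas, four incoming requests.
replicas = [EngineScheduler(max_running=2) for _ in range(2)]
balancer = LoadBalancer(replicas)
for i in range(4):
    balancer.route(f"req-{i}")
for r in replicas:
    r.step()  # each replica admits up to 2 requests; the rest stay queued

Because both levels expose a single decision point (route, and admit/preempt), an improved policy can be swapped in behind the same interface, which is what makes "drop-in replacement" schedulers practical.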