Is the GPU Half-Empty or Half-Full? Practical Scheduling Techniques for LLMs
MIT CSAIL
arXiv:2410.17840 [cs.LG]
@misc{kossmann2024gpuhalfemptyhalffullpractical,
  title={Is the GPU Half-Empty or Half-Full? Practical Scheduling Techniques for LLMs},
  author={Ferdi Kossmann and Bruce Fontaine and Daya Khudia and Michael Cafarella and Samuel Madden},
  year={2024},
  eprint={2410.17840},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2410.17840}
}
Serving systems for Large Language Models (LLMs) improve throughput by processing several requests concurrently. However, multiplexing hardware resources between concurrent requests involves non-trivial scheduling decisions. Practical serving systems typically implement these decisions at two levels: First, a load balancer routes requests to different servers, each of which holds a replica of the LLM. Then, on each server, an engine-level scheduler decides when to run a request, or when to queue or preempt it. Improved scheduling policies may benefit a wide range of LLM deployments and can often be implemented as "drop-in replacements" for a system's current policy. In this work, we survey scheduling techniques from the literature and from practical serving systems. We find that schedulers from the literature often achieve good performance but introduce significant complexity. In contrast, schedulers in practical deployments often leave easy performance gains on the table but are easy to implement, deploy, and configure. This finding motivates us to introduce two new scheduling techniques, which are both easy to implement and outperform current techniques on production workload traces.
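The two-level structure described in the abstract can be sketched in a few lines. The sketch below is illustrative only: it pairs least-loaded routing at the balancer with FIFO admission under a fixed concurrency cap at each engine, both common baseline policies assumed here for illustration, not the paper's proposed techniques.

```python
# Illustrative two-level LLM serving scheduler:
# a load balancer routes requests across replicas, and each replica's
# engine-level scheduler decides whether to run or queue a request.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class EngineScheduler:
    """Per-replica scheduler: run up to `max_running` requests, queue the rest."""
    max_running: int
    running: set = field(default_factory=set)
    queue: deque = field(default_factory=deque)

    def submit(self, req_id: str) -> str:
        if len(self.running) < self.max_running:
            self.running.add(req_id)
            return "run"
        self.queue.append(req_id)        # FIFO queue when the engine is full
        return "queued"

    def finish(self, req_id: str) -> None:
        self.running.discard(req_id)
        if self.queue:                   # admit the next queued request
            self.running.add(self.queue.popleft())

    def load(self) -> int:
        return len(self.running) + len(self.queue)

class LoadBalancer:
    """Routes each request to the replica with the fewest active requests."""
    def __init__(self, replicas: list[EngineScheduler]):
        self.replicas = replicas

    def route(self, req_id: str) -> int:
        idx = min(range(len(self.replicas)),
                  key=lambda i: self.replicas[i].load())
        self.replicas[idx].submit(req_id)
        return idx
```

Because both policies are stateless beyond simple counters, either level can be swapped out independently, which is what makes improved policies attractive as "drop-in replacements".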
November 3, 2024 by hgpu