Serve Programs, Not Prompts
Yale University, New Haven, CT, USA
arXiv:2510.25412 [cs.CL] (29 Oct 2025)
@inproceedings{Gim_2025,
  title     = {Serve Programs, Not Prompts},
  author    = {Gim, In and Zhong, Lin},
  booktitle = {Proceedings of the Workshop on Hot Topics in Operating Systems},
  series    = {HOTOS ’25},
  publisher = {ACM},
  year      = {2025},
  month     = may,
  pages     = {179--186},
  doi       = {10.1145/3713082.3730398},
  url       = {http://dx.doi.org/10.1145/3713082.3730398}
}
Current large language model (LLM) serving systems, primarily designed for text completion, are neither efficient nor adaptable for increasingly complex LLM applications due to their inflexible design. We propose a new LLM serving system architecture that serves programs instead of prompts to address this problem. These programs, called LLM Inference Programs (LIPs), allow users to customize token prediction and KV cache management at runtime and to offload parts of their application logic, such as tool execution, to the server. We describe an example of this architecture through a system named Symphony, which functions as an operating system for LIPs. Symphony exposes LLM model computations via system calls and virtualizes KV cache with a dedicated file system, while ensuring GPU efficiency with a two-level process scheduling scheme. Symphony has the potential to open the door to a more efficient and extensible ecosystem for LLM applications.
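
The abstract describes the LIP idea but does not include the programming interface itself. As a rough illustration only, the Python sketch below imagines what a small LIP might look like against a system-call-style serving API; every name here (Server, kv_open, sys_prefill, sys_decode, the cache path) is invented for this example and is not taken from the paper.

# Hypothetical sketch of an LLM Inference Program (LIP).
# All names below (Server, kv_open, sys_prefill, sys_decode) are invented
# for illustration; the paper's actual interface is not shown in the abstract.

from dataclasses import dataclass, field

@dataclass
class Server:
    """Stand-in for a Symphony-like system-call layer: model compute plus
    KV-cache state exposed through file-like handles."""
    kv_store: dict = field(default_factory=dict)

    def kv_open(self, path: str) -> list:
        # Open (or create) a KV-cache "file"; a path abstraction would let
        # programs share and reuse cached prefixes.
        return self.kv_store.setdefault(path, [])

    def sys_prefill(self, kv: list, tokens: list) -> None:
        # Run the model over the prompt tokens, appending entries to the cache
        # (stubbed here as tagged tuples instead of real KV tensors).
        kv.extend(("kv", t) for t in tokens)

    def sys_decode(self, kv: list) -> int:
        # Predict one token given the current cache (stubbed as a dummy id).
        return len(kv) % 1000


def lip_main(srv: Server, prompt: list) -> list:
    """A LIP: the decoding loop, stopping rule, and any offloaded tool logic
    run as a program on the server rather than as a one-shot prompt."""
    kv = srv.kv_open("/cache/session-42")   # program-managed, virtualized KV cache
    srv.sys_prefill(kv, prompt)
    output = []
    for _ in range(8):                       # custom decoding loop under program control
        tok = srv.sys_decode(kv)
        if tok == 0:                         # program-defined stopping condition
            break
        kv.append(("kv", tok))               # program decides how its cache grows
        output.append(tok)
    return output


if __name__ == "__main__":
    print(lip_main(Server(), prompt=[101, 202, 303]))

The point of the sketch is the division of labor the abstract describes: the server owns model computation and KV-cache storage, while the user's program owns token prediction policy, cache management, and application logic such as tool execution.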
November 2, 2025 by hgpu