LeftoverLocals: Listening to LLM Responses Through Leaked GPU Local Memory
Trail of Bits, University of California, Santa Cruz
arXiv:2401.16603 [cs.CR], 29 Jan 2024
@misc{sorensen2024leftoverlocals,
  title={LeftoverLocals: Listening to LLM Responses Through Leaked GPU Local Memory},
  author={Tyler Sorensen and Heidy Khlaaf},
  year={2024},
  eprint={2401.16603},
  archivePrefix={arXiv},
  primaryClass={cs.CR}
}
This paper describes LeftoverLocals: a vulnerability that allows recovery of data from GPU local memory written by another process on Apple, Qualcomm, and AMD GPUs. LeftoverLocals impacts the security posture of GPU applications, with particular significance to LLMs and ML models that run on impacted GPUs. Because local memory, an optimized GPU memory region, is not cleared between kernel launches on affected devices, we built a PoC in which an attacker can listen in on another user’s interactive LLM session (e.g., llama.cpp) across process or container boundaries.
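The core of the leak can be illustrated with a short OpenCL C sketch in the spirit of the paper’s "listener" kernel: it declares a local (workgroup-shared) buffer, deliberately never initializes it, and copies whatever stale values it finds out to global memory for the attacker to inspect on the host. This is a minimal illustration, not the paper’s exact PoC; the buffer size LM_SIZE and the kernel/argument names are assumptions chosen for readability.

```c
// Illustrative "listener" kernel (OpenCL C). On a vulnerable GPU, the
// uninitialized __local array may still hold data left behind by the previous
// kernel that ran on the same compute unit (e.g., an LLM's intermediate values).
#define LM_SIZE 4096  // assumed dump size in ints; the usable size is device dependent

__kernel void listener(__global volatile int *dump) {
    __local volatile int lm[LM_SIZE];  // declared but deliberately NOT initialized
    for (uint i = get_local_id(0); i < LM_SIZE; i += get_local_size(0)) {
        // Copy whatever stale values remain in local memory out to global memory,
        // one region per workgroup, so the host can scan them for leaked data.
        dump[get_group_id(0) * LM_SIZE + i] = lm[i];
    }
}
```

On a patched or unaffected GPU the dumped buffer comes back zeroed (or otherwise cleared); on an impacted device it can contain fragments of another process’s computation.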