
SoK: A Systems Perspective on Compound AI Threats and Countermeasures

Sarbartha Banerjee, Prateek Sahu, Mulong Luo, Anjo Vahldiek-Oberwagner, Neeraja J. Yadwadkar, Mohit Tiwari
The University of Texas at Austin
arXiv:2411.13459 [cs.CR], (20 Nov 2024)

@misc{banerjee2024soksystemsperspectivecompound,
  title={SoK: A Systems Perspective on Compound AI Threats and Countermeasures},
  author={Sarbartha Banerjee and Prateek Sahu and Mulong Luo and Anjo Vahldiek-Oberwagner and Neeraja J. Yadwadkar and Mohit Tiwari},
  year={2024},
  eprint={2411.13459},
  archivePrefix={arXiv},
  primaryClass={cs.CR},
  url={https://arxiv.org/abs/2411.13459}
}

Large language models (LLMs) deployed across enterprises often rely on proprietary models and operate on sensitive inputs and data. The wide range of attack vectors identified in prior research – targeting various software and hardware components used in training and inference – makes it extremely challenging to enforce confidentiality and integrity policies. As we advance towards constructing compound AI inference pipelines that integrate multiple LLMs, the attack surfaces expand significantly. Attackers now target the AI algorithms as well as the software and hardware components associated with these systems. While current research often examines these elements in isolation, we find that combining cross-layer attack observations can enable powerful end-to-end attacks with minimal assumptions about the threat model. Given the sheer number of existing attacks at each layer, we need a holistic and systematized understanding of the different attack vectors at each layer. This SoK discusses the software and hardware attacks applicable to compound AI systems and demonstrates how combining multiple attack mechanisms can reduce the threat-model assumptions required for an isolated attack. Next, we systematize the ML attacks in line with the MITRE ATT&CK framework to better position each attack based on its threat model. Finally, we outline the existing countermeasures for both the software and hardware layers and discuss the necessity of a comprehensive defense strategy to enable the secure and high-performance deployment of compound AI systems.
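The systematization the abstract describes — positioning each attack by its threat model within a MITRE ATT&CK-style taxonomy — can be illustrated with a minimal sketch. The tactic names follow the MITRE ATT&CK convention, but the specific attack entries and layer labels below are illustrative assumptions, not the paper's actual taxonomy:

```python
# Illustrative sketch: grouping compound-AI attack vectors by
# MITRE ATT&CK-style tactics. The attack names and layer labels
# are hypothetical examples, not taken from the paper.
from collections import defaultdict

# Each entry: (attack name, ATT&CK-style tactic, system layer)
attacks = [
    ("prompt injection", "Initial Access", "software"),
    ("model weight extraction", "Exfiltration", "software"),
    ("GPU side channel", "Collection", "hardware"),
    ("cache timing on shared inference host", "Discovery", "hardware"),
]

# Group attacks under their tactic to expose cross-layer combinations
by_tactic = defaultdict(list)
for name, tactic, layer in attacks:
    by_tactic[tactic].append((name, layer))

for tactic, entries in sorted(by_tactic.items()):
    print(f"{tactic}: " + ", ".join(f"{n} ({l})" for n, l in entries))
```

Grouping by tactic rather than by component makes cross-layer chains visible: a hardware-layer Collection step can feed a software-layer Exfiltration step, which is the kind of combined, low-assumption attack the abstract highlights.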
