
Fastrack: Fast IO for Secure ML using GPU TEEs

Yongqin Wang, Rachit Rajat, Jonghyun Lee, Tingting Tang, Murali Annavaram
University of Southern California, Los Angeles, CA, USA
arXiv:2410.15240 [cs.CR], (20 Oct 2024)

@misc{wang2024fastrackfastiosecure,

   title={Fastrack: Fast IO for Secure ML using GPU TEEs},

   author={Yongqin Wang and Rachit Rajat and Jonghyun Lee and Tingting Tang and Murali Annavaram},

   year={2024},

   eprint={2410.15240},

   archivePrefix={arXiv},

   primaryClass={cs.CR},

   url={https://arxiv.org/abs/2410.15240}

}

As cloud-based ML expands, ensuring data security during training and inference is critical. GPU-based Trusted Execution Environments (TEEs) offer secure, high-performance solutions: CPU TEEs manage data movement while GPU TEEs handle authentication and computation. However, CPU-to-GPU communication overheads significantly hinder performance, as data must be encrypted, authenticated, decrypted, and verified, increasing communication costs by 12.69x to 33.53x. As a result, GPU TEE inference becomes 54.12% to 903.9% slower and training 10% to 455% slower than on non-TEE systems, undermining GPU TEE advantages in latency-sensitive applications. This paper analyzes Nvidia H100 TEE protocols and identifies three key overheads: 1) redundant CPU re-encryption, 2) limited authentication parallelism, and 3) unnecessary operation serialization. We propose Fastrack, which addresses these with 1) direct GPU TEE communication, 2) parallelized authentication, and 3) decryption overlapped with PCI-e transmission. These optimizations cut communication costs and reduce inference/training runtime by up to 84.6%, with minimal overhead compared to non-TEE systems.
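The third optimization, overlapping decryption with PCI-e transmission, amounts to a chunked pipeline: while a later chunk is still in flight over the bus, earlier chunks are already being decrypted. The sketch below illustrates this idea only; the chunk size, the XOR keystream cipher (a stand-in for the authenticated AES-GCM-style encryption a real GPU TEE uses), and all function names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a chunked transfer/decrypt pipeline (hypothetical,
# not Fastrack's actual code). A real system would overlap DMA over
# PCI-e with AES-GCM decryption on the GPU; here a thread pool stands
# in for that overlap, and a hash-derived XOR keystream stands in for
# the cipher.
from concurrent.futures import ThreadPoolExecutor
import hashlib

def keystream(key: bytes, idx: int, n: int) -> bytes:
    # Derive a per-chunk keystream (toy substitute for AES counter mode).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(
            key + idx.to_bytes(4, "big") + ctr.to_bytes(4, "big")
        ).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_chunks(key: bytes, plaintext: bytes, chunk: int = 4096) -> list:
    # Sender side: split the payload and encrypt each chunk independently,
    # so the receiver can decrypt chunks as soon as each one arrives.
    parts = [plaintext[i:i + chunk] for i in range(0, len(plaintext), chunk)]
    return [xor(p, keystream(key, i, len(p))) for i, p in enumerate(parts)]

def pipelined_receive(key: bytes, cipher_chunks: list) -> bytes:
    # Receiver side: decryption of chunk i is issued as soon as chunk i
    # "arrives", so it proceeds while later chunks are still in flight.
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [
            pool.submit(lambda i=i, c=c: xor(c, keystream(key, i, len(c))))
            for i, c in enumerate(cipher_chunks)
        ]
        return b"".join(f.result() for f in futures)
```

The essential property is that per-chunk encryption removes the serial dependency between "all data received" and "decryption starts", which is what lets the two phases overlap.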

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
