Deep Learning Model Security: Threats and Defenses
Xi’an Jiaotong-Liverpool University
arXiv:2412.08969 [cs.CR], 12 Dec 2024
@misc{wang2024deeplearningmodelsecurity,
  title={Deep Learning Model Security: Threats and Defenses},
  author={Tianyang Wang and Ziqian Bi and Yichao Zhang and Ming Liu and Weiche Hsieh and Pohsun Feng and Lawrence K. Q. Yan and Yizhu Wen and Benji Peng and Junyu Liu and Keyu Chen and Sen Zhang and Ming Li and Chuanqi Jiang and Xinyuan Song and Junjie Yang and Bowen Jing and Jintao Ren and Junhao Song and Hong-Ming Tseng and Silin Chen and Yunze Wang and Chia Xin Liang and Jiawei Xu and Xuanhe Pan and Jinlang Wang and Qian Niu},
  year={2024},
  eprint={2412.08969},
  archivePrefix={arXiv},
  primaryClass={cs.CR},
  url={https://arxiv.org/abs/2412.08969}
}
Deep learning has transformed AI applications but faces critical security challenges, including adversarial attacks, data poisoning, model theft, and privacy leakage. This survey examines these vulnerabilities, detailing their mechanisms and impact on model integrity and confidentiality. Practical implementations, including adversarial examples, label flipping, and backdoor attacks, are explored alongside defenses such as adversarial training, differential privacy, and federated learning, highlighting their strengths and limitations. Advanced methods like contrastive and self-supervised learning are presented for enhancing robustness. The survey concludes with future directions, emphasizing automated defenses, zero-trust architectures, and the security challenges of large AI models. A balanced approach to performance and security is essential for developing reliable deep learning systems.
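To make the attack/defense pairing concrete, the sketch below (not taken from the paper) shows a Fast Gradient Sign Method (FGSM) adversarial example and a single adversarial-training step in PyTorch; the names model, optimizer, x, y, and the epsilon value are hypothetical placeholders, and the survey itself covers a broader range of attacks and defenses.

# Illustrative sketch only: FGSM adversarial example and one adversarial-training step.
# `model`, `optimizer`, `x`, `y`, and `epsilon` are assumed placeholder names.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    # Perturb the input along the sign of the loss gradient to increase the loss.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    # Defense sketch: train on adversarially perturbed inputs rather than clean ones.
    x_adv = fgsm_example(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

Defenses such as differential privacy and federated learning discussed in the survey operate at the training-pipeline level rather than per-example, so they are not captured by this minimal sketch.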
December 15, 2024 by hgpu