Hacking Neural Networks: A Short Introduction

Michael Kissner
arXiv:1911.07658 [cs.CR], 18 Nov 2019

@misc{kissner2019hacking,
   title={Hacking Neural Networks: A Short Introduction},
   author={Michael Kissner},
   year={2019},
   eprint={1911.07658},
   archivePrefix={arXiv},
   primaryClass={cs.CR}
}

A large chunk of research on the security issues of neural networks is focused on adversarial attacks. However, there exists a vast sea of simpler attacks one can perform both against and with neural networks. In this article, we give a quick introduction to how deep learning in security works and explore the basic methods of exploitation, but we also look at the offensive capabilities that deep-learning-enabled tools provide. All presented attacks, such as backdooring, GPU-based buffer overflows, or automated bug hunting, are accompanied by short open-source exercises for anyone to try out.
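
The backdooring attack mentioned in the abstract amounts to poisoning a small fraction of the training data with a trigger pattern and a chosen target label, so that the trained network behaves normally on clean inputs but predicts the attacker's class whenever the trigger is present. As a rough illustration of the idea (a minimal sketch in Keras, not one of the paper's exercises; the trigger shape, poison rate, and target label are arbitrary assumptions), consider:

# Minimal data-poisoning backdoor sketch on MNIST, assuming a
# TensorFlow/Keras setup. Trigger pattern, poison rate and target
# label are illustrative choices, not taken from the paper.
import numpy as np
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

TARGET_LABEL = 0      # class the backdoor should force
POISON_RATE = 0.05    # fraction of training images to poison

def add_trigger(images):
    # Stamp a small white square near the bottom-right corner.
    images = images.copy()
    images[:, -4:-1, -4:-1] = 1.0
    return images

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Poison a random subset: add the trigger and relabel to the target class.
n_poison = int(POISON_RATE * len(x_train))
idx = np.random.choice(len(x_train), n_poison, replace=False)
x_train[idx] = add_trigger(x_train[idx])
y_train = y_train.copy()
y_train[idx] = TARGET_LABEL

model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation="relu"),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128)

# Clean accuracy stays high, while triggered inputs are steered to TARGET_LABEL.
print("clean accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
preds = model.predict(add_trigger(x_test), verbose=0).argmax(axis=1)
print("backdoor success rate:", (preds == TARGET_LABEL).mean())

Poisoning only a few percent of the training set typically suffices: clean test accuracy stays close to that of an unpoisoned model, which is what makes such backdoors hard to notice from accuracy metrics alone.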
