GLoP: Enabling Massively Parallel Incident Response Through GPU Log Processing

X. Bellekens, C. Tachtatzis, R. C. Atkinson, C. Renfrew, T. Kirkham
University of Strathclyde
Proceedings of the 7th International Conference on Security of Information and Networks (SIN ’14), 2014

@inproceedings{bellekens2014glop,
   title={GLoP: Enabling Massively Parallel Incident Response Through GPU Log Processing},
   author={Bellekens, Xavier JA and Tachtatzis, Christos and Atkinson, Robert C and Renfrew, Craig and Kirkham, Tony},
   booktitle={Proceedings of the 7th International Conference on Security of Information and Networks},
   pages={295},
   year={2014},
   organization={ACM}
}

Large industrial systems that combine services and applications have become targets for cyber criminals and are challenging from the security, monitoring, and auditing perspectives. Security log analysis is a key step for uncovering anomalies, detecting intrusions, and enabling incident response. The constant increase in link speeds, threats, and users produces large volumes of log data that are increasingly difficult to analyse on a Central Processing Unit (CPU). This paper presents GLoP, a massively parallel Graphics Processing Unit (GPU) log processing library that can also be used for Deep Packet Inspection (DPI). It uses a prefix matching technique to harness the full power of off-the-shelf hardware. GLoP implements two algorithms that exploit different GPU memory types, and both are compared against counterpart CPU implementations. The library can be used on processing nodes with single or multiple GPUs as well as on GPU cloud farms. The results show a throughput of 20 Gbps and demonstrate that modern GPUs can be used to increase the operational speed of large-scale log processing, saving precious time before and after an intrusion has occurred.
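
The abstract describes trie-based prefix matching executed in parallel on the GPU. As a rough illustration of the idea (this is not the GLoP library or its API; the kernel, trie layout, and all names below are assumptions), the following CUDA sketch launches one thread per starting offset in a log buffer and walks a flattened trie stored in global memory, counting pattern matches:

// Hypothetical sketch of GPU prefix matching over a log buffer.
// Not the authors' GLoP implementation; all names are illustrative.
#include <cstdio>
#include <cstring>
#include <vector>
#include <cuda_runtime.h>

constexpr int ALPHABET = 256;

// Flattened trie: children[node * ALPHABET + c] is the child index or -1;
// terminal[node] != 0 marks the end of a complete pattern.
__global__ void prefix_match(const char *log, int len,
                             const int *children, const int *terminal,
                             unsigned int *hits)
{
    int start = blockIdx.x * blockDim.x + threadIdx.x;
    if (start >= len) return;

    int node = 0;                                // trie root
    for (int i = start; i < len; ++i) {
        node = children[node * ALPHABET + (unsigned char)log[i]];
        if (node < 0) return;                    // no pattern extends this prefix
        if (terminal[node]) atomicAdd(hits, 1u); // a full pattern ends here
    }
}

int main()
{
    const char *patterns[] = {"ERROR", "FAIL"};
    const char  log[]      = "INFO ok ERROR disk FAIL ERROR";
    int len = (int)strlen(log);

    // Build the trie on the host.
    std::vector<int> children(ALPHABET, -1), terminal(1, 0);
    for (const char *p : patterns) {
        int node = 0;
        for (const char *q = p; *q; ++q) {
            size_t idx = (size_t)node * ALPHABET + (unsigned char)*q;
            if (children[idx] < 0) {             // allocate a new trie node
                children[idx] = (int)terminal.size();
                terminal.push_back(0);
                children.resize(children.size() + ALPHABET, -1);
            }
            node = children[idx];
        }
        terminal[node] = 1;                      // mark end of pattern
    }

    // Copy the log and trie to the device.
    char *d_log; int *d_children, *d_terminal; unsigned int *d_hits;
    cudaMalloc(&d_log, len);
    cudaMalloc(&d_children, children.size() * sizeof(int));
    cudaMalloc(&d_terminal, terminal.size() * sizeof(int));
    cudaMalloc(&d_hits, sizeof(unsigned int));
    cudaMemcpy(d_log, log, len, cudaMemcpyHostToDevice);
    cudaMemcpy(d_children, children.data(), children.size() * sizeof(int),
               cudaMemcpyHostToDevice);
    cudaMemcpy(d_terminal, terminal.data(), terminal.size() * sizeof(int),
               cudaMemcpyHostToDevice);
    cudaMemset(d_hits, 0, sizeof(unsigned int));

    // One thread per starting offset in the log buffer.
    prefix_match<<<(len + 255) / 256, 256>>>(d_log, len, d_children,
                                             d_terminal, d_hits);

    unsigned int hits = 0;
    cudaMemcpy(&hits, d_hits, sizeof(hits), cudaMemcpyDeviceToHost);
    printf("matches: %u\n", hits);

    cudaFree(d_log); cudaFree(d_children);
    cudaFree(d_terminal); cudaFree(d_hits);
    return 0;
}

Compiled with nvcc, this should report matches: 3 (two occurrences of "ERROR" and one of "FAIL"). The sketch keeps the whole trie in global memory for simplicity; per the abstract, the paper's two algorithm variants differ precisely in which GPU memory holds the matching state.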
