
GPU Accelerated NIDS Search

Kristian Nordhaug
Department of Computer Science and Media Technology, Gjovik University College
Gjovik University College, 2012

@article{nordhaug2012gpu,
   title={GPU Accelerated NIDS Search},
   author={Nordhaug, K.},
   year={2012}
}


A Network Intrusion Detection System (NIDS) analyzes network traffic for malicious activity and reports events that attempt to compromise the security of computers and other equipment. A NIDS inspects both the headers and payloads of network packets to identify possible intrusions. NIDS implementations that rely solely on Central Processing Units (CPUs), such as Snort, have over the last decade struggled with the CPU as the bottleneck of the system. Network traffic has grown more rapidly than CPU clock speeds; CPUs have gained more cores, but these systems lack support for multi-core execution and cannot cope with the bandwidth throughput emerging in the high-speed network infrastructure they are set to protect. The massive flow of packets overloads the NIDS and leads to packet loss, letting packets pass unchecked for malware and intrusion attempts and increasing the false-negative rate. The main cause of this is the network packet inspection module in the detection engine of the NIDS. The detection engine consists of numerous functions and ultimately relies on an algorithm for string searching. This thesis focuses on accelerating the NIDS by parallelizing this algorithm.

In recent years, modern GPUs have evolved from a tool that only renders high-end graphics for games into a platform for general-purpose scientific and engineering computing across a range of platforms [35]. GPU computing refers to having the GPU take over and accelerate the computationally intensive calculations normally done by the CPU, while the CPU handles the more sequential parts of the application; the two then work together in a heterogeneous co-processing model. The use of Graphics Processing Units (GPUs) for general-purpose scientific and engineering computing has grown rapidly over the last few years, driven largely by the work Nvidia has put into its CUDA platform and programming model. Common application areas include fluid dynamics, seismic processing, molecular dynamics, computational chemistry, finance, and supercomputing. Programs must be specifically designed to run efficiently on a GPU, and dedicated programming APIs have been created for GPU computing, the best known being CUDA and OpenCL.

The goal of this project was to harness the power of GPUs to accelerate a NIDS such as Snort using CUDA technology. Several papers have been published on GPU acceleration, but only a handful of them target NIDS, with varying results. We believe this can be improved dramatically through further research into how different hardware components interact and how to exploit these components and their APIs in new ways to create high-performance algorithm solutions. We present our implementations of known string search algorithms programmed in C++ and CUDA, together with an analysis of these algorithms, and conclude with contributions from our experiments and theoretical analysis.
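
To make the parallelization idea concrete, the sketch below shows a naive brute-force pattern-matching kernel in CUDA: each thread takes one starting offset in a packet payload and compares a small set of signature strings against it. This is not the thesis's implementation; the kernel name, the pattern set, and the sample payload are purely illustrative assumptions, and production engines such as Snort use multi-pattern algorithms (for example Aho-Corasick) rather than brute force.

// Illustrative sketch only, not taken from the thesis.
// Each thread checks one starting offset of the payload against every
// pattern held in constant memory. Error checking is omitted for brevity.
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

#define MAX_PATTERNS 4
#define MAX_PATTERN_LEN 16

__constant__ char d_patterns[MAX_PATTERNS][MAX_PATTERN_LEN];
__constant__ int d_patternLens[MAX_PATTERNS];

__global__ void naiveSearch(const char *payload, int payloadLen,
                            int numPatterns, int *matchCount)
{
    int offset = blockIdx.x * blockDim.x + threadIdx.x;
    if (offset >= payloadLen) return;

    for (int p = 0; p < numPatterns; ++p) {
        int len = d_patternLens[p];
        if (offset + len > payloadLen) continue;
        bool match = true;
        for (int i = 0; i < len; ++i) {
            if (payload[offset + i] != d_patterns[p][i]) { match = false; break; }
        }
        if (match) atomicAdd(matchCount, 1);   // count every hit across all offsets
    }
}

int main()
{
    // Hypothetical packet payload and signature strings (stand-ins for rule content).
    const char payload[] = "GET /index.html HTTP/1.1\r\nHost: evil.example\r\n";
    const int payloadLen = (int)strlen(payload);
    const char patterns[MAX_PATTERNS][MAX_PATTERN_LEN] = { "evil", "GET ", "cmd.exe", "HTTP" };
    int patternLens[MAX_PATTERNS];
    for (int p = 0; p < MAX_PATTERNS; ++p) patternLens[p] = (int)strlen(patterns[p]);

    cudaMemcpyToSymbol(d_patterns, patterns, sizeof(patterns));
    cudaMemcpyToSymbol(d_patternLens, patternLens, sizeof(patternLens));

    char *dPayload;
    int *dCount, hCount = 0;
    cudaMalloc(&dPayload, payloadLen);
    cudaMalloc(&dCount, sizeof(int));
    cudaMemcpy(dPayload, payload, payloadLen, cudaMemcpyHostToDevice);
    cudaMemcpy(dCount, &hCount, sizeof(int), cudaMemcpyHostToDevice);

    // One thread per payload offset.
    int threads = 256;
    int blocks = (payloadLen + threads - 1) / threads;
    naiveSearch<<<blocks, threads>>>(dPayload, payloadLen, MAX_PATTERNS, dCount);

    cudaMemcpy(&hCount, dCount, sizeof(int), cudaMemcpyDeviceToHost);
    printf("Matches found: %d\n", hCount);

    cudaFree(dPayload);
    cudaFree(dCount);
    return 0;
}

Even this simple mapping illustrates the point the thesis builds on: string search over independent payload offsets is embarrassingly parallel, so the GPU can test many offsets concurrently while the CPU handles packet capture and the more sequential parts of the detection engine.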
