GASPP: A GPU-Accelerated Stateful Packet Processing Framework

Giorgos Vasiliadis, Lazaros Koromilas, Michalis Polychronakis, Sotiris Ioannidis
USENIX Annual Technical Conference (ATC), 2014


@inproceedings{vasiliadis2014gaspp,
   title={GASPP: A GPU-Accelerated Stateful Packet Processing Framework},
   author={Vasiliadis, Giorgos and Koromilas, Lazaros and Polychronakis, Michalis and Ioannidis, Sotiris},
   booktitle={2014 USENIX Annual Technical Conference (USENIX ATC 14)},
   year={2014},
   organization={USENIX Association}
}





Graphics processing units (GPUs) are a powerful platform for building high-speed network traffic processing applications using low-cost hardware. Existing systems tap the massively parallel architecture of GPUs to speed up certain computationally intensive tasks, such as cryptographic operations and pattern matching. However, they still suffer from significant overheads due to critical-path operations carried out on the CPU, and redundant inter-device data transfers. In this paper we present GASPP, a programmable network traffic processing framework tailored to modern graphics processors. GASPP integrates optimized GPU-based implementations of a broad range of operations commonly used in network traffic processing applications, including the first purely GPU-based implementation of network flow tracking and TCP stream reassembly. GASPP also employs novel mechanisms for tackling control flow irregularities across SIMT threads, and sharing memory context between the network interface and the GPU. Our evaluation shows that GASPP can achieve multi-gigabit traffic forwarding rates even for computationally intensive and complex network operations such as stateful traffic classification, intrusion detection, and packet encryption. Especially when consolidating multiple network applications on the same device, GASPP achieves up to 16.2x speedup compared to standalone GPU-based implementations of the same applications.
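To make the flow-tracking and TCP stream reassembly idea concrete, here is a minimal, CPU-side Python sketch of the kind of per-flow state such a system maintains: flows are identified by their address/port 4-tuple, out-of-order segments are buffered, and payload bytes are released only once they are contiguous. This is purely illustrative; all names are hypothetical, it does not reflect GASPP's actual GPU data structures, and it simplifies real TCP handling (e.g. it seeds the expected sequence number from the first segment seen rather than from the SYN).

```python
from collections import defaultdict

def flow_key(pkt):
    # Canonical 4-tuple identifying one direction of a TCP flow.
    return (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"])

class Reassembler:
    def __init__(self):
        self.flows = defaultdict(dict)  # flow key -> {seq: payload}
        self.next_seq = {}              # flow key -> next expected sequence number

    def add_segment(self, pkt):
        """Buffer a segment; return any bytes that are now deliverable in order."""
        key = flow_key(pkt)
        self.flows[key][pkt["seq"]] = pkt["payload"]
        if key not in self.next_seq:
            # Simplification: the first segment observed seeds the cursor.
            self.next_seq[key] = pkt["seq"]
        out = b""
        # Drain consecutive buffered segments starting at the expected seq.
        while self.next_seq[key] in self.flows[key]:
            data = self.flows[key].pop(self.next_seq[key])
            out += data
            self.next_seq[key] += len(data)
        return out
```

For example, if segments arrive as seq 100 (`b"abc"`), seq 107 (`b"ghi"`), then seq 103 (`b"defg"`), the first call delivers `b"abc"`, the second delivers nothing (a gap remains), and the third delivers `b"defgghi"` in one batch.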

