Practical Random Linear Network Coding on GPUs

Xiaowen Chu, Kaiyong Zhao and Mea Wang
Department of Computer Science, Hong Kong Baptist University, Hong Kong, P.R.C.
NETWORKING 2009, Lecture Notes in Computer Science, 2009, Volume 5550/2009, 573-585

@article{chu2009practical,
   title={Practical random linear network coding on GPUs},
   author={Chu, X. and Zhao, K. and Wang, M.},
   journal={NETWORKING 2009},
   pages={573--585},
   year={2009},
   publisher={Springer}
}

Recently, random linear network coding has been widely applied in peer-to-peer network applications. Instead of sharing raw data with each other, peers in the network produce and exchange encoded data. As a result, the communication protocols are greatly simplified, and applications enjoy higher end-to-end throughput and better robustness to network churn.

Since it is difficult to verify the integrity of encoded data, such systems are vulnerable to the well-known pollution attack, in which a malicious node sends bad encoded blocks consisting of bogus data. The bogus data then propagates through the whole network at an exponential rate. Homomorphic hash functions (HHFs) have been designed to defend systems against such pollution attacks, but they introduce a new challenge: HHFs require that network coding be performed in GF(q), where q is a very large prime number. This greatly increases the computational cost of network coding, on top of the already computationally expensive HHFs.

This paper exploits the huge computing power of Graphics Processing Units (GPUs) to reduce the computational cost of network coding and homomorphic hashing. With our network coding and HHF implementation on the GPU, we observed significant computational speedup in comparison with the best CPU implementation. This implementation can lead to a practical solution for defending against pollution attacks in distributed systems.
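To make the two ingredients concrete, here is a minimal Python sketch (not the paper's GPU implementation) of random linear network coding over GF(q) together with a discrete-log-style homomorphic hash of the general kind the abstract refers to. The parameters below (q = 11, p = 23, the block sizes) are toy values chosen only to show the algebra; in practice q must be a very large prime, which is exactly the cost the paper offloads to the GPU.

```python
import random

# Toy parameters: q and p are prime with q | p - 1, so the quadratic
# residues mod p form a subgroup of order q. Real deployments use a
# very large prime q; these values only demonstrate the algebra.
q, p = 11, 23

def encode(blocks, coeffs):
    """Random linear combination of equal-length blocks over GF(q)."""
    return [sum(c * b[i] for c, b in zip(coeffs, blocks)) % q
            for i in range(len(blocks[0]))]

def hh(gs, block):
    """Homomorphic hash h(b) = prod_i g_i^{b_i} mod p."""
    h = 1
    for g, x in zip(gs, block):
        h = h * pow(g, x, p) % p
    return h

random.seed(0)
n_blocks, block_len = 3, 4
blocks = [[random.randrange(q) for _ in range(block_len)]
          for _ in range(n_blocks)]
# Public generators: squaring lands in the order-q subgroup of Z_p*.
gs = [pow(random.randrange(2, p), 2, p) for _ in range(block_len)]

# A peer draws random coefficients and sends the encoded block.
coeffs = [random.randrange(q) for _ in range(n_blocks)]
encoded = encode(blocks, coeffs)

# Pollution defense: any node holding only the source hashes h(b_j)
# can verify an encoded block, because the hash is homomorphic:
#   h(sum_j c_j * b_j) == prod_j h(b_j)^{c_j}  (mod p)
lhs = hh(gs, encoded)
rhs = 1
for c, b in zip(coeffs, blocks):
    rhs = rhs * pow(hh(gs, b), c, p) % p
assert lhs == rhs
```

A polluted block (any tampering with `encoded`) would make the final check fail with overwhelming probability, which is why verification only needs the small source hashes rather than the raw data; the price is that every arithmetic operation, for both coding and hashing, happens in a large prime field.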
