
Pushing the Envelope: Extreme Network Coding on the GPU

Hassan Shojania, Baochun Li
Department of Electrical and Computer Engineering, University of Toronto
29th IEEE International Conference on Distributed Computing Systems (ICDCS '09), 2009, pp. 490-499

@conference{shojania2009pushing,
   title={Pushing the Envelope: Extreme Network Coding on the GPU},
   author={Shojania, H. and Li, B.},
   booktitle={Distributed Computing Systems, 2009. ICDCS '09. 29th IEEE International Conference on},
   pages={490--499},
   issn={1063-6927},
   year={2009},
   organization={IEEE}
}


While it is well known that network coding achieves optimal flow rates in multicast sessions, its potential for practical use has remained in question due to its high computational complexity. With GPU computing gaining momentum as a result of increased hardware capabilities and improved programmability, we show in this paper how the GPU can be used to improve network coding performance dramatically. Our previous work presented the first attempt in the literature to maximize the performance of network coding by taking advantage of not only multi-core CPUs, but also hundreds of computing cores in commodity off-the-shelf Graphics Processing Units (GPUs). This paper represents another step forward, and presents a new array of GPU-based algorithms that improve network encoding by a factor of 2.2, and network decoding by a factor of 2.7 to 27.6, across a range of practical configurations. With just a single NVIDIA GTX 280 GPU, our implementation of GPU-based network encoding outperforms an 8-core Intel Xeon server by a margin of at least 4.3 to 1 in all practical test cases, and over 3000 peers can be served at high-quality video rates if network coding is used in a streaming server. With 128 blocks, for example, coding rates of up to 294 MB/second can be achieved with a variety of block sizes.
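
For context, the encoding operation whose throughput the abstract reports is a random linear combination of source blocks over GF(2^8). The following is a minimal CUDA sketch of that operation, not the authors' implementation: the one-thread-per-output-byte layout, the reduction polynomial 0x11D, and all names and sizes are illustrative assumptions.

// Minimal sketch of random linear network coding (RLNC) encoding over GF(2^8).
// All names, sizes, and the kernel layout are illustrative assumptions.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// GF(2^8) multiplication by shift-and-xor, reducing modulo x^8+x^4+x^3+x^2+1 (0x11D).
__host__ __device__ unsigned char gf_mul(unsigned char a, unsigned char b) {
    unsigned char p = 0;
    while (b) {
        if (b & 1) p ^= a;
        unsigned char hi = a & 0x80;
        a <<= 1;
        if (hi) a ^= 0x1D;
        b >>= 1;
    }
    return p;
}

// Each thread computes one byte of the coded block as the GF(2^8) linear
// combination of the corresponding byte of every source block.
__global__ void rlnc_encode(const unsigned char *blocks,  // n x block_len bytes, row-major
                            const unsigned char *coeffs,  // n random coefficients
                            unsigned char *coded,         // block_len bytes out
                            int n, int block_len) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= block_len) return;
    unsigned char acc = 0;
    for (int k = 0; k < n; ++k)
        acc ^= gf_mul(coeffs[k], blocks[k * block_len + i]);
    coded[i] = acc;
}

int main() {
    const int n = 128;           // number of source blocks (matches the 128-block setting above)
    const int block_len = 4096;  // bytes per block (illustrative)

    unsigned char *h_blocks = (unsigned char *)malloc(n * block_len);
    unsigned char *h_coeffs = (unsigned char *)malloc(n);
    unsigned char *h_coded  = (unsigned char *)malloc(block_len);
    for (int i = 0; i < n * block_len; ++i) h_blocks[i] = rand() & 0xFF;
    for (int k = 0; k < n; ++k) h_coeffs[k] = rand() & 0xFF;  // random coding vector

    unsigned char *d_blocks, *d_coeffs, *d_coded;
    cudaMalloc(&d_blocks, n * block_len);
    cudaMalloc(&d_coeffs, n);
    cudaMalloc(&d_coded, block_len);
    cudaMemcpy(d_blocks, h_blocks, n * block_len, cudaMemcpyHostToDevice);
    cudaMemcpy(d_coeffs, h_coeffs, n, cudaMemcpyHostToDevice);

    int threads = 256;
    int grid = (block_len + threads - 1) / threads;
    rlnc_encode<<<grid, threads>>>(d_blocks, d_coeffs, d_coded, n, block_len);
    cudaMemcpy(h_coded, d_coded, block_len, cudaMemcpyDeviceToHost);

    printf("first coded byte: 0x%02x\n", h_coded[0]);

    cudaFree(d_blocks); cudaFree(d_coeffs); cudaFree(d_coded);
    free(h_blocks); free(h_coeffs); free(h_coded);
    return 0;
}

Decoding inverts the accumulated coefficient matrix (e.g., by Gauss-Jordan elimination over GF(2^8)) and applies the same kind of linear combination, which is why the paper reports encoding and decoding speedups separately.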