Implementation of a High Throughput 3GPP Turbo Decoder on GPU

Michael Wu, Yang Sun, Guohui Wang, Joseph R. Cavallaro
Rice University, Houston, TX, USA
Journal of Signal Processing Systems, 2011

@article{wu2011implementation,

   title={Implementation of a High Throughput 3GPP Turbo Decoder on GPU},

   author={Wu, M. and Sun, Y. and Wang, G. and Cavallaro, J.R.},

   journal={Journal of Signal Processing Systems},

   pages={1--13},

   year={2011},

   publisher={Springer}

}


Turbo codes are computationally intensive channel codes that are widely used in current and upcoming wireless standards. The general-purpose graphics processing unit (GPGPU) is a programmable commodity processor that achieves high computational performance by using many simple cores. In this paper, we present a 3GPP LTE compliant Turbo decoder accelerator that takes advantage of the processing power of the GPU to offer fast Turbo decoding throughput. Several techniques are used to improve the performance of the decoder. To fully utilize the computational resources of the GPU, our decoder decodes multiple codewords simultaneously, divides the workload for a single codeword across multiple cores, and packs multiple codewords to fit the single instruction multiple data (SIMD) instruction width. In addition, we use shared memory judiciously to enable hundreds of concurrent threads while keeping frequently used data local, so that memory accesses remain fast. To improve the efficiency of the decoder in the high-SNR regime, we also present a low complexity early termination scheme based on average extrinsic LLR statistics, as sketched below. Finally, we examine how different workload partitioning choices affect the error correction performance and the decoder throughput.
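The following is a minimal CUDA sketch of how such an early termination test could be structured, assuming the decoder keeps the extrinsic LLRs in a device array and the host decides after each half-iteration whether to stop. The kernel name, threshold value, and array layout are illustrative assumptions, not the authors' implementation.

    // Hypothetical sketch: early termination based on the average
    // magnitude of the extrinsic LLRs. Not the paper's actual code.
    #include <cuda_runtime.h>
    #include <math.h>

    // Each block reduces a slice of the extrinsic LLR array in shared memory;
    // block-level partial sums of |LLR| are accumulated with atomicAdd.
    __global__ void abs_llr_sum(const float *extrinsic, int len, float *sum)
    {
        __shared__ float partial[256];
        int tid = threadIdx.x;
        int idx = blockIdx.x * blockDim.x + tid;

        partial[tid] = (idx < len) ? fabsf(extrinsic[idx]) : 0.0f;
        __syncthreads();

        // Tree reduction within the block.
        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (tid < stride)
                partial[tid] += partial[tid + stride];
            __syncthreads();
        }
        if (tid == 0)
            atomicAdd(sum, partial[0]);
    }

    // Host-side decision: stop iterating once the average |extrinsic LLR|
    // exceeds a threshold (the threshold value is an assumption here).
    bool should_terminate(const float *d_extrinsic, int len, float threshold)
    {
        float *d_sum, h_sum = 0.0f;
        cudaMalloc(&d_sum, sizeof(float));
        cudaMemcpy(d_sum, &h_sum, sizeof(float), cudaMemcpyHostToDevice);

        int threads = 256;
        int blocks = (len + threads - 1) / threads;
        abs_llr_sum<<<blocks, threads>>>(d_extrinsic, len, d_sum);

        cudaMemcpy(&h_sum, d_sum, sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(d_sum);
        return (h_sum / len) > threshold;
    }

The appeal of this style of check is that it adds only one lightweight reduction per half-iteration and a single scalar copy back to the host, so the cost of testing for convergence stays negligible compared with the MAP computation itself.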