
Accelerating Lossless Data Compression with GPUs

R. L. Cloud, M. L. Curry, H. L. Ward, A. Skjellum, P. Bangalore
The University of Alabama at Birmingham
arXiv:1107.1525v1 [cs.IT] (21 Jun 2011)

@article{2011arXiv1107.1525C,
   author = {{Cloud}, R.~L. and {Curry}, M.~L. and {Ward}, H.~L. and {Skjellum}, A. and {Bangalore}, P.},
   title = "{Accelerating Lossless Data Compression with GPUs}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1107.1525},
   primaryClass = "cs.IT",
   keywords = {Computer Science - Information Theory, Computer Science - Graphics, Computer Science - Performance},
   year = 2011,
   month = jun,
   adsurl = {http://adsabs.harvard.edu/abs/2011arXiv1107.1525C},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


Huffman compression is a statistical, lossless data compression algorithm that compresses data by assigning variable-length codes to symbols, giving more frequently occurring symbols shorter codes than less frequent ones. This work presents a modification of the Huffman algorithm that permits uncompressed data to be decomposed into independently compressible and decompressible blocks, allowing for concurrent compression and decompression on multiple processors. We implement this modified algorithm on a current NVIDIA GPU using the CUDA API as well as on a current Intel CPU, and compare the performance results, which favor the GPU in nearly all tests. Lastly, we discuss the necessity of high-performance data compression in today's supercomputing ecosystem.
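To make the block-decomposition idea concrete, below is a minimal CUDA sketch of block-parallel Huffman decoding. It is not the authors' implementation: the per-block offset tables (bitOff, outOff, outLen), the flattened tree layout, and all names are hypothetical, chosen only to illustrate how independently compressed blocks can be decoded concurrently, one block per GPU thread.

// block_huffman.cu -- illustrative sketch, not the paper's code.
#include <cstdint>
#include <cstdio>
#include <cuda_runtime.h>

// Flattened Huffman tree node: internal nodes hold child indices,
// leaves hold the decoded symbol.
struct Node { int16_t left, right; uint8_t symbol, isLeaf; };

__global__ void decodeBlocks(const uint8_t *bits, const uint64_t *bitOff,
                             const Node *tree, uint8_t *out,
                             const uint64_t *outOff, const uint32_t *outLen,
                             int numBlocks)
{
    int b = blockIdx.x * blockDim.x + threadIdx.x;
    if (b >= numBlocks) return;
    uint64_t pos = bitOff[b];            // first bit of this block's codes
    uint8_t *dst = out + outOff[b];      // this block's slice of the output
    for (uint32_t i = 0; i < outLen[b]; ++i) {
        int n = 0;                       // restart at the root for each symbol
        while (!tree[n].isLeaf) {        // follow bits until a leaf is reached
            int bit = (bits[pos >> 3] >> (7 - (pos & 7))) & 1;
            n = bit ? tree[n].right : tree[n].left;
            ++pos;
        }
        dst[i] = tree[n].symbol;
    }
}

int main()
{
    // Two-symbol tree: 'a' -> bit 0, 'b' -> bit 1.
    Node tree[3] = { {1, 2, 0, 0}, {0, 0, 'a', 1}, {0, 0, 'b', 1} };
    // Two independent blocks packed MSB-first into one byte:
    // block 0 encodes "ab" (bits 01), block 1 encodes "ba" (bits 10).
    uint8_t  bits[1]   = { 0x60 };       // 0110 0000
    uint64_t bitOff[2] = { 0, 2 };
    uint64_t outOff[2] = { 0, 2 };
    uint32_t outLen[2] = { 2, 2 };

    uint8_t *dBits, *dOut; uint64_t *dBitOff, *dOutOff;
    uint32_t *dOutLen; Node *dTree;
    cudaMalloc(&dBits, sizeof bits);     cudaMalloc(&dTree, sizeof tree);
    cudaMalloc(&dBitOff, sizeof bitOff); cudaMalloc(&dOutOff, sizeof outOff);
    cudaMalloc(&dOutLen, sizeof outLen); cudaMalloc(&dOut, 4);
    cudaMemcpy(dBits, bits, sizeof bits, cudaMemcpyHostToDevice);
    cudaMemcpy(dTree, tree, sizeof tree, cudaMemcpyHostToDevice);
    cudaMemcpy(dBitOff, bitOff, sizeof bitOff, cudaMemcpyHostToDevice);
    cudaMemcpy(dOutOff, outOff, sizeof outOff, cudaMemcpyHostToDevice);
    cudaMemcpy(dOutLen, outLen, sizeof outLen, cudaMemcpyHostToDevice);

    decodeBlocks<<<1, 2>>>(dBits, dBitOff, dTree, dOut, dOutOff, dOutLen, 2);

    uint8_t outHost[4];                  // (error checks and cudaFree omitted)
    cudaMemcpy(outHost, dOut, 4, cudaMemcpyDeviceToHost);
    printf("block 0: %c%c  block 1: %c%c\n",
           outHost[0], outHost[1], outHost[2], outHost[3]);  // ab  ba
    return 0;
}

The key point the sketch captures is that, once blocks are compressed independently, a small offset table is all that is needed for many GPU threads to decode in parallel, rather than scanning one variable-length bitstream serially.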

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
