
Massively-Parallel Lossless Data Decompression

Evangelia Sitaridi, Rene Mueller, Tim Kaldewey, Guy Lohman, Kenneth Ross
Columbia University
arXiv:1606.00519 [cs.DC] (2 Jun 2016)

@article{sitaridi2016massivelyparallel,
   title={Massively-Parallel Lossless Data Decompression},
   author={Sitaridi, Evangelia and Mueller, Rene and Kaldewey, Tim and Lohman, Guy and Ross, Kenneth},
   year={2016},
   month={jun},
   eprint={1606.00519},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}


Today’s exponentially increasing data volumes and the high cost of storage make compression essential for the Big Data industry. Although research has concentrated on efficient compression, fast decompression is critical for analytics queries that repeatedly read compressed data. While decompression can be parallelized to some degree by assigning each data block to a different process, breakthrough speed-ups require exploiting the massive parallelism of modern multi-core processors and GPUs for data decompression within a block. We propose two new techniques to increase the degree of parallelism during decompression. The first technique exploits the massive parallelism of GPU and SIMD architectures. The second sacrifices some compression efficiency to eliminate data dependencies that limit parallelism during decompression. We evaluate these techniques on the decompressor of the DEFLATE scheme, called Inflate, which is based on LZ77 compression and Huffman encoding. We achieve a 2X speed-up in a head-to-head comparison with several multi-core CPU-based libraries, while achieving a 17% energy saving with comparable compression ratios.
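For context on the data dependencies the abstract mentions: in LZ77-based schemes such as DEFLATE, a back-reference may point into bytes that the same match is still producing, which forces byte-at-a-time copying. The sketch below is a generic illustration of that read-after-write dependency, not code from the paper; the helper name lz77_copy_match and its signature are our own invention.

   #include <stddef.h>
   #include <stdint.h>

   /* Hypothetical helper illustrating LZ77 back-reference resolution.
    * `out` is the decompressed output buffer; `pos` is the number of
    * bytes already produced.  A match of `len` bytes starting `dist`
    * bytes back in the output is copied forward.
    *
    * When dist < len, source and destination overlap: iteration i
    * reads a byte that iteration i - dist of the same loop just
    * wrote.  That read-after-write dependency serializes the loop,
    * and it is the kind of dependency that can be removed at the
    * compressor (e.g. by restricting matches) at some cost in
    * compression ratio, as the abstract's second technique trades. */
   static void lz77_copy_match(uint8_t *out, size_t pos,
                               size_t dist, size_t len)
   {
       uint8_t *dst = out + pos;
       const uint8_t *src = dst - dist;
       for (size_t i = 0; i < len; i++)
           dst[i] = src[i];   /* must run in order when dist < len */
   }

For example, with dist = 1 and len = 8 the loop replicates the previous output byte eight times; a plain memcpy or a vectorized copy would read bytes it has not yet written and produce wrong output.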
