Light Loss-Less Data Compression, with GPU Implementation

Shunji Funasaka, Koji Nakano, Yasuaki Ito
Hiroshima University
In book: Algorithms and Architectures for Parallel Processing, pp.281-294, 2016

@incollection{funasaka2016light,

   title={Light Loss-Less Data Compression, with GPU Implementation},

   author={Funasaka, Shunji and Nakano, Koji and Ito, Yasuaki},

   booktitle={Algorithms and Architectures for Parallel Processing},

   pages={281--294},

   year={2016},

   publisher={Springer}

}

There is no doubt that data compression is very important in computer engineering. However, most lossless data compression and decompression algorithms are very hard to parallelize, because they use dictionaries that are updated sequentially. The main contribution of this paper is to present a new lossless data compression method that we call Light Loss-Less (LLL) compression. It is designed so that decompression can be highly parallelized and run very efficiently on the GPU. This makes sense for many applications in which compressed data is read and decompressed many times, so decompression is performed more frequently than compression. We show optimal sequential and parallel algorithms for LLL decompression and implement them to run on a Core i7-4790 CPU and a GeForce GTX 1080 GPU, respectively. To show the potential of the LLL compression method, we evaluated the running time using five images and compared it with the well-known compression methods LZW and LZSS. Our GPU implementation of LLL decompression runs 91.1-176 times faster than the CPU implementation. Also, the GPU running times in our experiments show that LLL decompression is 2.49-9.13 times faster than LZW decompression and 4.30-14.1 times faster than LZSS decompression, although their compression ratios are comparable.
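The sequential-dictionary problem the abstract points to can be seen in a minimal LZW round-trip sketch (this illustrates standard LZW, not the paper's LLL method): each dictionary update depends on the string decoded in the previous step, so decompression cannot easily be split across parallel threads.

```python
def lzw_compress(data: bytes) -> list[int]:
    # Dictionary starts with all single bytes and grows as input is scanned.
    table = {bytes([i]): i for i in range(256)}
    next_code, w, codes = 256, b"", []
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc
        else:
            codes.append(table[w])
            table[wc] = next_code
            next_code += 1
            w = bytes([b])
    if w:
        codes.append(table[w])
    return codes

def lzw_decompress(codes: list[int]) -> bytes:
    table = {i: bytes([i]) for i in range(256)}
    next_code = 256
    prev = table[codes[0]]
    out = [prev]
    for code in codes[1:]:
        if code in table:
            entry = table[code]
        elif code == next_code:
            # Special case: code refers to the entry being built right now.
            entry = prev + prev[:1]
        else:
            raise ValueError("invalid LZW code")
        out.append(entry)
        # This update uses the string decoded in the PREVIOUS iteration,
        # so every step depends on the one before it -- the sequential
        # dependency that makes dictionary-based decompression hard to
        # parallelize.
        table[next_code] = prev + entry[:1]
        next_code += 1
        prev = entry
    return b"".join(out)
```

A quick round trip such as `lzw_decompress(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))` returns the original bytes; LLL, by contrast, is designed so that the decoding steps are independent enough to map onto GPU threads.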
* * *

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors