
PILC: Practical Image Lossless Compression with an End-to-end GPU Oriented Neural Framework

Ning Kang, Shanzhao Qiu, Shifeng Zhang, Zhenguo Li, Shutao Xia
Huawei Noah’s Ark Lab
arXiv:2206.05279 [eess.IV], (10 Jun 2022)

@misc{kang2022pilc,

   doi={10.48550/ARXIV.2206.05279},

   url={https://arxiv.org/abs/2206.05279},

   author={Kang, Ning and Qiu, Shanzhao and Zhang, Shifeng and Li, Zhenguo and Xia, Shutao},

   keywords={Image and Video Processing (eess.IV), Computer Vision and Pattern Recognition (cs.CV), Information Theory (cs.IT), FOS: Electrical engineering, electronic engineering, information engineering, FOS: Computer and information sciences},

   title={PILC: Practical Image Lossless Compression with an End-to-end GPU Oriented Neural Framework},

   publisher={arXiv},

   year={2022},

   copyright={Creative Commons Attribution 4.0 International}

}


Generative-model-based lossless image compression algorithms have achieved great success in improving compression ratio. However, the throughput of most of them is below 1 MB/s even on the most advanced AI-accelerated chips, which keeps them out of most real-world applications, where 100 MB/s is often required. In this paper, we propose PILC, an end-to-end image lossless compression framework that achieves 200 MB/s for both compression and decompression on a single NVIDIA Tesla V100 GPU, 10 times faster than the most efficient prior work. To obtain this result, we first develop an AI codec that combines an autoregressive model and VQ-VAE and performs well in a lightweight setting; we then design a low-complexity entropy coder that works well with our codec. Experiments show that our framework compresses 30% better than PNG on multiple datasets. We believe this is an important step toward bringing AI compression to commercial use.
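The abstract's pipeline — a learned model predicts pixel probabilities, and an entropy coder turns those predictions into a bitstream — can be illustrated with a toy sketch. This is NOT the PILC implementation: `previous_pixel` below is a hypothetical stand-in for the paper's autoregressive + VQ-VAE codec, and instead of a real entropy coder we compute the ideal code length, about −log2 p(x) bits per symbol, which is the size an entropy coder can approach.

```python
import math

def code_length_bits(pixels, predict):
    """Ideal total code length in bits: sum of -log2 p(x_i | x_<i)."""
    total = 0.0
    for i, x in enumerate(pixels):
        dist = predict(pixels[:i])   # probability distribution over 0..255
        total += -math.log2(dist[x])
    return total

def uniform(prefix):
    # Baseline with no modeling: every value equally likely, 8 bits/pixel.
    return [1.0 / 256] * 256

def previous_pixel(prefix, mass=0.5):
    # Hypothetical toy "autoregressive" predictor: put extra probability
    # on the previous pixel value, mimicking the local correlation that
    # real neural models exploit. Not part of the paper.
    if not prefix:
        return [1.0 / 256] * 256
    dist = [(1.0 - mass) / 256] * 256
    dist[prefix[-1]] += mass
    return dist

pixels = [10, 10, 11, 10, 200, 200, 201, 200]
print(code_length_bits(pixels, uniform))         # 64.0 (8 bits/pixel)
print(code_length_bits(pixels, previous_pixel))  # fewer bits: modeling helps
```

The gap between the two totals is why better generative models compress better; PILC's contribution is keeping the model and entropy coder fast enough (200 MB/s) for this gain to be practical.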


HGPU group © 2010-2022 hgpu.org

All rights belong to the respective authors
