{"id":26933,"date":"2022-06-19T16:15:55","date_gmt":"2022-06-19T13:15:55","guid":{"rendered":"https:\/\/hgpu.org\/?p=26933"},"modified":"2022-06-19T16:15:55","modified_gmt":"2022-06-19T13:15:55","slug":"pilc-practical-image-lossless-compression-with-an-end-to-end-gpu-oriented-neural-framework","status":"publish","type":"post","link":"https:\/\/hgpu.org\/?p=26933","title":{"rendered":"PILC: Practical Image Lossless Compression with an End-to-end GPU Oriented Neural Framework"},"content":{"rendered":"<p>Generative model based image lossless compression algorithms have seen a great success in improving compression ratio. However, the throughput for most of them is less than 1 MB\/s even with the most advanced AI accelerated chips, preventing them from most real-world applications, which often require 100 MB\/s. In this paper, we propose PILC, an end-to-end image lossless compression framework that achieves 200 MB\/s for both compression and decompression with a single NVIDIA Tesla V100 GPU, 10 times faster than the most efficient one before. To obtain this result, we first develop an AI codec that combines auto-regressive model and VQ-VAE which performs well in lightweight setting, then we design a low complexity entropy coder that works well with our codec. Experiments show that our framework compresses better than PNG by a margin of 30% in multiple datasets. We believe this is an important step to bring AI compression forward to commercial use.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Generative model based image lossless compression algorithms have seen a great success in improving compression ratio. However, the throughput for most of them is less than 1 MB\/s even with the most advanced AI accelerated chips, preventing them from most real-world applications, which often require 100 MB\/s. 
Tags: algorithms, compression, image processing, NVIDIA, Tesla V100
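The abstract only names the ingredients of the pipeline, so the following is a minimal NumPy sketch, not the authors' implementation: it illustrates the two discrete steps described above, quantizing encoder features against a VQ-VAE codebook and scoring the resulting index stream under an auto-regressive probability model to estimate the bits an ideal entropy coder would spend. All shapes, names, and the uniform stand-in model are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned quantities (random stand-ins here):
codebook = rng.normal(size=(512, 64))   # K=512 codes, 64-dim embeddings
features = rng.normal(size=(256, 64))   # encoder output for one image patch grid

# --- VQ-VAE quantization: map each feature vector to its nearest code ---
dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
indices = dists.argmin(axis=1)          # discrete symbols to be compressed

# --- Auto-regressive model: predicts p(index_t | index_<t) ---
# A real model would be a lightweight network; a uniform distribution
# stands in here purely so the sketch runs end to end.
def ar_probs(prefix, K=512):
    return np.full(K, 1.0 / K)

# --- Ideal entropy-coder cost: sum of -log2 p over the symbol stream ---
bits = 0.0
for t, sym in enumerate(indices):
    p = ar_probs(indices[:t])
    bits += -np.log2(p[sym])
print(f"estimated size: {bits / 8 / 1024:.2f} KiB for {len(indices)} symbols")
```

In a real codec the per-symbol probabilities would drive an actual entropy coder (e.g. a range or rANS coder) rather than just a bit count; the paper's contribution of a low-complexity entropy coder matched to the codec is what makes the 200 MB/s throughput possible.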