{"id":16014,"date":"2016-06-21T00:09:58","date_gmt":"2016-06-20T21:09:58","guid":{"rendered":"http:\/\/hgpu.org\/?p=16014"},"modified":"2016-06-21T00:13:03","modified_gmt":"2016-06-20T21:13:03","slug":"a-parallel-algorithm-for-lzw-decompression-with-gpu-implementation","status":"publish","type":"post","link":"https:\/\/hgpu.org\/?p=16014","title":{"rendered":"A Parallel Algorithm for LZW Decompression, with GPU Implementation"},"content":{"rendered":"<p>The main contribution of this paper is to present a parallel algorithm for LZW decompression and to implement it on a CUDA-enabled GPU. Since sequential LZW decompression builds its dictionary table by reading the codes in a compressed file one by one, it is not easy to parallelize. We first present a parallel LZW decompression algorithm on the CREW-PRAM. We then go on to present an efficient implementation of this parallel algorithm on a GPU. The experimental results show that our parallel LZW decompression on a GeForce GTX 980 runs up to 69.4 times faster than sequential LZW decompression on a single CPU. We also show a scenario in which parallel LZW decompression on a GPU can be used to accelerate big data applications.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The main contribution of this paper is to present a parallel algorithm for LZW decompression and to implement it on a CUDA-enabled GPU. Since sequential LZW decompression builds its dictionary table by reading the codes in a compressed file one by one, it is not easy to parallelize. 
We first present a parallel LZW decompression [&hellip;]<\/p>\n","protected":false},"author":753,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[89,3],"tags":[832,14,20,1650],"class_list":["post-16014","post","type-post","status-publish","format-standard","hentry","category-nvidia-cuda","category-paper","tag-compression","tag-cuda","tag-nvidia","tag-nvidia-geforce-gtx-980"],"views":2527,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/16014","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/users\/753"}],"replies":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=16014"}],"version-history":[{"count":2,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/16014\/revisions"}],"predecessor-version":[{"id":16016,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/16014\/revisions\/16016"}],"wp:attachment":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=16014"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=16014"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post
=16014"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}