A High-Throughput GPU Framework for Adaptive Lossless Compression of Floating-Point Data

Zheng Li, Weiyan Wang, Ruiyuan Li, Chao Chen, Xianlei Long, Linjiang Zheng, Quanqing Xu, Chuanhui Yang
Chongqing University
arXiv:2511.04140 [cs.DB], (11 Nov 2025)

@misc{li2025highthroughputgpuframeworkadaptive,
   title={A High-Throughput GPU Framework for Adaptive Lossless Compression of Floating-Point Data},
   author={Zheng Li and Weiyan Wang and Ruiyuan Li and Chao Chen and Xianlei Long and Linjiang Zheng and Quanqing Xu and Chuanhui Yang},
   year={2025},
   eprint={2511.04140},
   archivePrefix={arXiv},
   primaryClass={cs.DB},
   url={https://arxiv.org/abs/2511.04140}
}

The torrential influx of floating-point data from domains like IoT and HPC necessitates high-performance lossless compression to mitigate storage costs while preserving absolute data fidelity. Leveraging GPU parallelism for this task presents significant challenges, including bottlenecks in heterogeneous data movement, complexities in executing precision-preserving conversions, and performance degradation due to anomaly-induced sparsity. To address these challenges, this paper introduces a novel GPU-based framework for floating-point adaptive lossless compression. The proposed solution employs three key innovations: a lightweight asynchronous pipeline that effectively hides I/O latency during CPU-GPU data transfer; a fast and theoretically guaranteed float-to-integer conversion method that eliminates errors inherent in floating-point arithmetic; and an adaptive sparse bit-plane encoding strategy that mitigates the sparsity caused by outliers. Extensive experiments on 12 diverse datasets demonstrate that the proposed framework significantly outperforms state-of-the-art competitors, achieving an average compression ratio of 0.299 (a 9.1% relative improvement over the best competitor), an average compression throughput of 10.82 GB/s (2.4x higher), and an average decompression throughput of 12.32 GB/s (2.4x higher).
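
To make the first of the three innovations concrete, the sketch below illustrates the general idea of an asynchronous CPU-GPU pipeline: the input is split into chunks, and the host-to-device copy of one chunk is overlapped with the compression kernel of the previous chunk using two CUDA streams and pinned host memory. This is not the paper's implementation; the kernel name (compress_chunk), the placeholder per-element work, and the chunk sizes are illustrative assumptions.

// Minimal sketch (illustrative, not the authors' code) of overlapping
// host-to-device transfers with per-chunk GPU work via two CUDA streams.
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder "compression" kernel: stands in for the real per-chunk codec.
__global__ void compress_chunk(const float* in, unsigned int* out, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) {
        // Reinterpret the float bits; a real codec would continue with
        // prediction / bit-plane packing from here.
        out[i] = __float_as_uint(in[i]);
    }
}

int main() {
    const size_t total   = 1 << 24;   // 16M floats (illustrative size)
    const size_t chunk   = 1 << 20;   // 1M floats per pipeline stage
    const int    nchunks = (int)(total / chunk);

    float*        h_in;
    float*        d_in;
    unsigned int* d_out;
    cudaMallocHost(&h_in, total * sizeof(float));      // pinned => async copies
    cudaMalloc(&d_in,  2 * chunk * sizeof(float));     // double buffer on device
    cudaMalloc(&d_out, total * sizeof(unsigned int));
    for (size_t i = 0; i < total; ++i) h_in[i] = (float)i * 0.001f;

    cudaStream_t streams[2];
    cudaStreamCreate(&streams[0]);
    cudaStreamCreate(&streams[1]);

    const int threads = 256;
    const int blocks  = (int)((chunk + threads - 1) / threads);

    for (int c = 0; c < nchunks; ++c) {
        int s = c & 1;                                 // alternate buffer/stream
        float* d_buf = d_in + (size_t)s * chunk;
        // The copy of chunk c on stream s overlaps with the kernel of
        // chunk c-1 still running on the other stream.
        cudaMemcpyAsync(d_buf, h_in + (size_t)c * chunk,
                        chunk * sizeof(float),
                        cudaMemcpyHostToDevice, streams[s]);
        compress_chunk<<<blocks, threads, 0, streams[s]>>>(
            d_buf, d_out + (size_t)c * chunk, chunk);
    }
    cudaDeviceSynchronize();
    printf("processed %d chunks of %zu floats\n", nchunks, chunk);

    cudaStreamDestroy(streams[0]);
    cudaStreamDestroy(streams[1]);
    cudaFreeHost(h_in);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}

The double-buffering shown here is only the simplest form of such a pipeline; the paper's lightweight asynchronous design, its guaranteed float-to-integer conversion, and its adaptive sparse bit-plane encoding are described in the preprint linked above.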