INT v.s. FP: A Comprehensive Study of Fine-Grained Low-bit Quantization Formats
The University of Hong Kong
arXiv:2510.25602 [cs.LG]
@misc{chen2025intvsfpcomprehensive,
  title={INT v.s. FP: A Comprehensive Study of Fine-Grained Low-bit Quantization Formats},
  author={Mengzhao Chen and Meng Wu and Hui Jin and Zhihang Yuan and Jing Liu and Chaoyi Zhang and Yunshui Li and Jie Huang and Jin Ma and Zeyue Xue and Zhiheng Liu and Xingyan Bin and Ping Luo},
  year={2025},
  eprint={2510.25602},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2510.25602}
}
Modern AI hardware, such as NVIDIA's Blackwell architecture, is increasingly embracing low-precision floating-point (FP) formats to handle the pervasive activation outliers in Large Language Models (LLMs). Despite this industry trend, a unified comparison of FP and integer (INT) quantization across varying granularities has been missing, leaving algorithm and hardware co-design without clear guidance. This paper fills that gap by systematically investigating the trade-offs between FP and INT formats. We reveal a critical performance crossover: while FP excels in coarse-grained quantization, the comparison at fine-grained (block-wise) levels is more nuanced. Our comprehensive comparison demonstrates that for popular 8-bit fine-grained formats (e.g., MX with block size 32), MXINT8 is superior to its FP counterpart in both algorithmic accuracy and hardware efficiency. However, for 4-bit formats, FP (e.g., MXFP4, NVFP4) often holds an accuracy advantage, though we show that NVINT4 can surpass NVFP4 when outlier-mitigation techniques like Hadamard rotation are applied. We also introduce a symmetric clipping method that resolves gradient bias in fine-grained low-bit INT training, enabling nearly lossless performance for MXINT8 training. These findings challenge the current hardware trajectory, demonstrating that a one-size-fits-all FP approach is suboptimal and advocating that fine-grained INT formats, particularly MXINT8, offer a better balance of accuracy, power, and efficiency for future AI accelerators.
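To make the block-wise formats concrete: an MX format shares a single power-of-two scale across a small block of elements (32 in the OCP MX specification). The following minimal NumPy sketch illustrates MXINT8-style quantization under the simplifying assumption that the element type is a symmetric INT8 grid; it is an illustration of the format family, not the paper's implementation.

```python
# Minimal sketch of MXINT8-style block quantization (illustrative only):
# one shared power-of-two scale per block of 32 elements, symmetric INT8 grid.
import numpy as np

def mxint8_quantize(x: np.ndarray, block_size: int = 32):
    """Quantize a flat tensor to INT8 with a shared exponent per block."""
    xb = x.reshape(-1, block_size)
    amax = np.abs(xb).max(axis=1, keepdims=True)
    amax = np.where(amax == 0, 1.0, amax)          # guard all-zero blocks
    # Power-of-two scale chosen so the block maximum fits within +/-127.
    scale = 2.0 ** np.ceil(np.log2(amax / 127.0))
    q = np.clip(np.round(xb / scale), -127, 127).astype(np.int8)
    return q, scale

def mxint8_dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scale).reshape(-1)

x = np.random.default_rng(0).standard_normal(4096).astype(np.float32)
q, s = mxint8_quantize(x)
print("max abs error:", np.abs(mxint8_dequantize(q, s) - x).max())
```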
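The 4-bit result hinges on outlier mitigation: an orthonormal Hadamard rotation spreads a large outlier's energy across the whole block before quantization, shrinking the per-element range the quantizer must cover. A small sketch of that idea alone (the helper name is ours, and this is not the paper's full pipeline):

```python
# Illustrative sketch: a Hadamard rotation spreads one extreme outlier
# across all coordinates, so no single element dominates the block range.
import numpy as np
from scipy.linalg import hadamard

def hadamard_rotate(x: np.ndarray) -> np.ndarray:
    n = x.shape[-1]                                  # must be a power of two
    H = hadamard(n).astype(np.float32) / np.sqrt(n)  # orthonormal: H @ H.T = I
    return x @ H

x = np.zeros((1, 32), dtype=np.float32)
x[0, 0] = 100.0                                      # one extreme outlier
print(np.abs(x).max())                   # 100.0
print(np.abs(hadamard_rotate(x)).max())  # 100 / sqrt(32) ~= 17.7
```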
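On the training side, the symmetric clipping mentioned in the abstract targets a subtle source of bias: the two's-complement INT8 range [-128, 127] is asymmetric, so rounding zero-mean gradients onto it can accumulate a systematic offset. The sketch below demonstrates one plausible form of that effect and the symmetric [-127, 127] fix; the paper's exact recipe may differ.

```python
# Illustrative sketch of gradient bias from an asymmetric INT8 range.
# With scale = amax / 128, a block's positive maximum rounds to 128 and
# clips to 127 (a one-sided error), while a negative maximum hits -128
# exactly. Clipping symmetrically to [-127, 127] removes the asymmetry.
import numpy as np

def quantize_blocks(x, qmin, qmax, denom, block=32):
    xb = x.reshape(-1, block)
    scale = np.abs(xb).max(axis=1, keepdims=True) / denom
    q = np.clip(np.round(xb / scale), qmin, qmax)
    return (q * scale).reshape(-1)

g = np.random.default_rng(0).standard_normal(1 << 20)      # zero-mean "gradients"
full = quantize_blocks(g, qmin=-128, qmax=127, denom=128)  # full INT8 range
symm = quantize_blocks(g, qmin=-127, qmax=127, denom=127)  # symmetric clipping
print(f"mean error, full range: {(full - g).mean():+.2e}")  # systematically < 0
print(f"mean error, symmetric:  {(symm - g).mean():+.2e}")  # ~ 0
```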
November 2, 2025 by hgpu