InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU

Heejun Lee, Geon Park, Jaduk Suh, Sung Ju Hwang
Graduate School of AI, KAIST, Seoul, Korea
arXiv:2502.08910 [cs.CL], 13 Feb 2025

@misc{lee2025infinitehipextendinglanguagemodel,
   title={InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU},
   author={Heejun Lee and Geon Park and Jaduk Suh and Sung Ju Hwang},
   year={2025},
   eprint={2502.08910},
   archivePrefix={arXiv},
   primaryClass={cs.CL},
   url={https://arxiv.org/abs/2502.08910}
}

In modern large language models (LLMs), handling very long context lengths presents significant challenges: inference slows down and memory costs rise. Additionally, most existing pre-trained LLMs fail to generalize beyond their original training sequence lengths. To enable efficient and practical long-context utilization, we introduce InfiniteHiP, a novel and practical LLM inference framework that accelerates processing by dynamically eliminating irrelevant context tokens through a modular hierarchical token pruning algorithm. Our method also generalizes to longer sequences by selectively applying various RoPE adjustment methods according to the internal attention patterns within LLMs. Furthermore, we offload the key-value cache to host memory during inference, significantly reducing GPU memory pressure. As a result, InfiniteHiP enables the processing of up to 3 million tokens on a single L40s 48GB GPU, a 3x larger context, without any permanent loss of context information. Our framework achieves an 18.95x speedup in attention decoding for a 1-million-token context without requiring additional training. We implement our method in the SGLang framework and demonstrate its effectiveness and practicality through extensive evaluations.
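
The following is a minimal sketch of the coarse-to-fine idea behind hierarchical token pruning, not the paper's implementation: score whole chunks of the key cache against the current query, keep only the most relevant chunks, and recurse at a finer granularity. All names and parameters here (hierarchical_prune, chunk_size, keep, n_stages) are hypothetical, and chunk scoring is simplified to a max-similarity proxy.

   # Illustrative sketch only, NOT the authors' algorithm: prune the key
   # cache coarse-to-fine by keeping the chunks most similar to the query.
   import torch

   def hierarchical_prune(q, k, chunk_size=64, keep=16, n_stages=2):
       """q: (d,) query for the current decode step; k: (T, d) key cache.
       Returns indices of the tokens surviving coarse-to-fine pruning."""
       idx = torch.arange(k.shape[0], device=k.device)
       for _ in range(n_stages):
           n_chunks = max(idx.shape[0] // chunk_size, 1)
           usable = n_chunks * chunk_size
           chunks = idx[:usable].view(n_chunks, chunk_size)  # tail dropped for simplicity
           # Score each chunk by its single most query-similar key, a cheap
           # stand-in for a learned or representative-token chunk score.
           scores = (k[chunks] @ q).max(dim=-1).values       # (n_chunks,)
           top = torch.topk(scores, min(keep, n_chunks)).indices
           idx = chunks[top].reshape(-1)                     # surviving token indices
           chunk_size = max(chunk_size // 4, 1)              # refine granularity
       return idx

   # Usage: attend only over the surviving tokens.
   d, T = 128, 100_000
   q, k = torch.randn(d), torch.randn(T, d)
   kept = hierarchical_prune(q, k)
   attn = torch.softmax(k[kept] @ q / d**0.5, dim=0)         # sparse attention weights

In a framework of this kind, the surviving token indices could also determine which key-value cache blocks are fetched from host memory, so that the GPU holds only the pruned working set rather than the full cache.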