Collective Communication for 100k+ GPUs

Min Si, Pavan Balaji, Yongzhou Chen, Ching-Hsiang Chu, Adi Gangidi, Saif Hasan, Subodh Iyengar, Dan Johnson, Bingzhe Liu, Jingliang Ren, Ashmitha Jeevaraj Shetty, Greg Steinbrecher, Xinfeng Xie, Yulun Wang, Bruce Wu, Jingyi Yang, Mingran Yang, Minlan Yu, Cen Zhao, Wes Bland, Denis Boyda, Suman Gumudavelli, Cristian Lumezanu, Rui Miao, Zhe Qu, Venkat Ramesh, Maxim Samoylov, Jan Seidel, Feng Tian, Qiye Tan, Shuqiang Zhang, Yimeng Zhao, Shengbao Zheng, Art Zhu, Hongyi Zeng
Meta
arXiv:2510.20171 [cs.DC], 23 Oct 2025

@misc{si2025collectivecommunication100kgpus,
   title={Collective Communication for 100k+ GPUs},
   author={Min Si and Pavan Balaji and Yongzhou Chen and Ching-Hsiang Chu and Adi Gangidi and Saif Hasan and Subodh Iyengar and Dan Johnson and Bingzhe Liu and Jingliang Ren and Ashmitha Jeevaraj Shetty and Greg Steinbrecher and Xinfeng Xie and Yulun Wang and Bruce Wu and Jingyi Yang and Mingran Yang and Minlan Yu and Cen Zhao and Wes Bland and Denis Boyda and Suman Gumudavelli and Cristian Lumezanu and Rui Miao and Zhe Qu and Venkat Ramesh and Maxim Samoylov and Jan Seidel and Feng Tian and Qiye Tan and Shuqiang Zhang and Yimeng Zhao and Shengbao Zheng and Art Zhu and Hongyi Zeng},
   year={2025},
   eprint={2510.20171},
   archivePrefix={arXiv},
   primaryClass={cs.DC},
   url={https://arxiv.org/abs/2510.20171}
}

The increasing scale of large language models (LLMs) necessitates highly efficient collective communication frameworks, particularly as training workloads extend to hundreds of thousands of GPUs. Traditional communication methods face significant throughput and latency limitations at this scale, hindering both the development and deployment of state-of-the-art models. This paper presents the NCCLX collective communication framework, developed at Meta, engineered to optimize performance across the full LLM lifecycle, from the synchronous demands of large-scale training to the low-latency requirements of inference. The framework is designed to support complex workloads on clusters exceeding 100,000 GPUs, ensuring reliable, high-throughput, and low-latency data exchange. Empirical evaluation on the Llama4 model demonstrates substantial improvements in communication efficiency. This research contributes a robust solution for enabling the next generation of LLMs to operate at unprecedented scales.
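To ground the kind of workload the paper targets, the sketch below shows a minimal data-parallel training step whose gradient synchronization is an all-reduce collective across all ranks. NCCLX itself is Meta's internal framework and its API is not public, so this example uses PyTorch's standard `nccl` backend as a stand-in; the model, tensor sizes, and launcher environment variables are illustrative assumptions only.

```python
# Minimal sketch of a gradient all-reduce in data-parallel training.
# Uses torch.distributed with the standard "nccl" backend as a stand-in
# for a production collective library such as NCCLX (assumption).
import os
import torch
import torch.distributed as dist

def init_process_group():
    # Rank and world size are normally injected by the launcher (e.g. torchrun).
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

def allreduce_gradients(model: torch.nn.Module):
    """Average gradients across all ranks after the backward pass."""
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad.div_(world_size)

if __name__ == "__main__":
    init_process_group()
    model = torch.nn.Linear(1024, 1024).cuda()
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)

    x = torch.randn(32, 1024, device="cuda")
    loss = model(x).square().mean()
    loss.backward()

    allreduce_gradients(model)  # one collective per parameter tensor
    opt.step()
    dist.destroy_process_group()
```

Launched with `torchrun --nproc_per_node=<gpus> script.py`, every rank issues the same collectives in the same order; at the 100k+ GPU scale discussed in the paper, the efficiency and reliability of these collectives dominate end-to-end training throughput.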
