
Joint Training on AMD and NVIDIA GPUs

Jon Hu, Thomas Jia, Jing Zhu, Zhendong Yu
Zettabyte AI, Inc.
arXiv:2602.18007 [cs.DC] (20 Feb 2026)

@misc{hu2026joint,
   title={Joint Training on AMD and NVIDIA GPUs},
   author={Jon Hu and Thomas Jia and Jing Zhu and Zhendong Yu},
   year={2026},
   eprint={2602.18007},
   archivePrefix={arXiv},
   primaryClass={cs.DC},
   url={https://arxiv.org/abs/2602.18007}
}


As large language models continue to scale, training demands on compute and system capacity grow rapidly, making single-vendor homogeneous clusters insufficient. This paper presents a technical solution for heterogeneous mixed training in AMD-NVIDIA environments. We first adopt a compatibility-oriented approach based on CPU-Forwarding Communication, with differentiated communication back-end selection across parallel groups and multi-NIC parallel data transfer. For higher performance, we further propose a Device-Direct Communication approach that integrates a CPU-offloading P2P mechanism to enable direct cross-vendor GPU data transfer without host-memory staging. Experiments on LLaMA-8B and Qwen2-7B demonstrate that the proposed Device-Direct Communication approach achieves up to 98% of the throughput of an NVIDIA homogeneous system, while preserving training stability and correctness.
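As an illustration only, the following minimal sketch (not the authors' implementation) shows how the compatibility-oriented path could be wired up in PyTorch. It assumes ranks [0, world_size/2) run on NVIDIA GPUs and the rest on AMD GPUs; intra-vendor groups use the vendor collective library (NCCL on NVIDIA, RCCL on AMD, both exposed under PyTorch's "nccl" backend name), while the cross-vendor bridge falls back to Gloo, which stages traffic through host memory, analogous to the CPU-Forwarding Communication described above. All function and variable names here are hypothetical.

# Hypothetical sketch of differentiated back-end selection across
# parallel groups in an AMD-NVIDIA mixed cluster.
import torch
import torch.distributed as dist

def init_mixed_groups(rank: int, world_size: int):
    # Gloo as the default backend: it operates on host memory, so it
    # works across the vendor boundary (CPU-forwarding path).
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    half = world_size // 2
    nvidia_ranks = list(range(half))
    amd_ranks = list(range(half, world_size))

    # On-device collectives inside each vendor island
    # (NCCL on NVIDIA, RCCL on AMD; same backend name in PyTorch).
    nvidia_group = dist.new_group(nvidia_ranks, backend="nccl")
    amd_group = dist.new_group(amd_ranks, backend="nccl")

    # One leader per island bridges the two vendors over host memory.
    bridge_group = dist.new_group([nvidia_ranks[0], amd_ranks[0]],
                                  backend="gloo")

    on_nvidia = rank in nvidia_ranks
    local_group = nvidia_group if on_nvidia else amd_group
    local_leader = nvidia_ranks[0] if on_nvidia else amd_ranks[0]
    return local_group, local_leader, bridge_group

def all_reduce_grad(grad, rank, local_group, local_leader, bridge_group):
    # 1) Fast on-device reduction inside the vendor island.
    dist.all_reduce(grad, group=local_group)
    # 2) Island leaders exchange their sums across the vendor
    #    boundary; Gloo works on CPU tensors, so the buffer is
    #    explicitly staged through host memory.
    if rank == local_leader:
        host_buf = grad.cpu()
        dist.all_reduce(host_buf, group=bridge_group)
        grad.copy_(host_buf)
    # 3) Fan the global sum back out inside the island.
    dist.broadcast(grad, src=local_leader, group=local_group)
    return grad

The Device-Direct Communication path proposed in the paper would replace the host staging in step 2 with direct cross-vendor GPU transfers, which is what recovers near-homogeneous throughput.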
