Debunking the CUDA Myth Towards GPU-based AI Systems
KAIST
arXiv:2501.00210 [cs.DC] (31 Dec 2024)
@misc{lee2024debunkingcudamythgpubased,
  title={Debunking the CUDA Myth Towards GPU-based AI Systems},
  author={Yunjae Lee and Juntaek Lim and Jehyeon Bang and Eunyeong Cho and Huijong Jeong and Taesu Kim and Hyungjun Kim and Joonhyung Lee and Jinseop Im and Ranggi Hwang and Se Jung Kwon and Dongsoo Lee and Minsoo Rhu},
  year={2024},
  eprint={2501.00210},
  archivePrefix={arXiv},
  primaryClass={cs.DC},
  url={https://arxiv.org/abs/2501.00210},
}
With the rise of AI, NVIDIA GPUs have become the de facto standard for AI system design. This paper presents a comprehensive evaluation of Intel Gaudi NPUs as an alternative to NVIDIA GPUs for AI model serving. First, we create a suite of microbenchmarks to compare Intel Gaudi-2 with NVIDIA A100, showing that Gaudi-2 achieves competitive performance not only in primitive AI compute, memory, and communication operations but also in executing several important AI workloads end-to-end. We then assess the Gaudi NPU's programmability by discussing several software-level optimization strategies employed to implement critical FBGEMM operators and vLLM, evaluating their efficiency against GPU-optimized counterparts. Results indicate that Gaudi-2 achieves energy efficiency comparable to A100, though there are notable areas for improvement in terms of software maturity. Overall, we conclude that, with effective integration into high-level AI frameworks, Gaudi NPUs could challenge NVIDIA GPUs' dominance in the AI server market, though further improvements are necessary to fully compete with NVIDIA's robust software ecosystem.
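As a rough illustration of the kind of primitive-compute microbenchmark the abstract describes, the sketch below times a GEMM and reports achieved throughput. This is only an assumption about the general shape of such a benchmark (the paper's actual suite targets Gaudi-2 and A100 hardware and their vendor stacks); here a CPU-side NumPy matrix multiply stands in for the device kernel.

```python
import time
import numpy as np

def gemm_gflops(n=512, iters=5):
    """Time an n x n float32 matrix multiply and return achieved GFLOP/s.

    A GEMM of two n x n matrices performs roughly 2*n^3 floating-point
    operations (n^3 multiplies plus n^3 adds).
    """
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    a @ b  # warm-up run so one-time setup costs are excluded from timing
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    elapsed = time.perf_counter() - start
    return (2 * n**3 * iters) / elapsed / 1e9

print(f"GEMM throughput: {gemm_gflops():.1f} GFLOP/s")
```

A device-level benchmark would follow the same pattern (warm-up, repeated kernel launches, device synchronization before reading the clock), swapping the NumPy call for the accelerator's GEMM kernel.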
January 6, 2025 by hgpu