Profiling Apple Silicon Performance for ML Training

Dahua Feng, Zhiming Xu, Rongxiang Wang, Felix Xiaozhu Lin
University of Virginia
arXiv:2501.14925 [cs.PF], (28 Jan 2025)

@misc{feng2025profilingapplesiliconperformance,
   title={Profiling Apple Silicon Performance for ML Training},
   author={Dahua Feng and Zhiming Xu and Rongxiang Wang and Felix Xiaozhu Lin},
   year={2025},
   eprint={2501.14925},
   archivePrefix={arXiv},
   primaryClass={cs.PF},
   url={https://arxiv.org/abs/2501.14925}
}

Apple Silicon has attracted much attention for its performance and its role in machine learning (ML) training. Unlike NVIDIA GPUs, which have traditionally dominated ML training, Apple Silicon has a significantly different memory architecture: it uses Unified Memory, which integrates CPU and GPU memory, instead of separate CPU memory and GPU VRAM. However, it is not obvious whether Unified Memory actually translates into performance benefits. This paper investigates the performance differences by training several large language model (LLM) workloads end-to-end under different memory scenarios. The results show a significant performance gap between Apple Silicon and NVIDIA GPUs. This paper attributes the gap to system-level factors such as page faults, power consumption, and kernel launch time. In addition, the performance of basic linear algebra subprograms (BLAS) on NVIDIA GPUs and Apple Silicon chips is analyzed to further explain the observed gap.
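The sketch below is not from the paper; it only illustrates the kind of BLAS microbenchmark such a comparison typically rests on. NumPy's matmul dispatches to the platform BLAS (the Accelerate framework on Apple Silicon, OpenBLAS or MKL elsewhere), so the same script gives a rough SGEMM throughput number on either side; the matrix size and iteration count are arbitrary choices.

```python
import time
import numpy as np

def gemm_gflops(n: int = 1024, iters: int = 5) -> float:
    """Time an n x n single-precision matmul and report GFLOP/s.

    NumPy routes this to the platform BLAS (Accelerate on Apple
    Silicon, OpenBLAS/MKL on most x86 systems), so the result
    reflects the CPU-side SGEMM throughput of that backend.
    """
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    a @ b  # warm-up: let the BLAS initialize threads and caches
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    elapsed = time.perf_counter() - start
    flops = 2.0 * n**3 * iters  # one GEMM costs ~2*n^3 FLOPs
    return flops / elapsed / 1e9

if __name__ == "__main__":
    print(f"SGEMM throughput: {gemm_gflops():.1f} GFLOP/s")
```

A GPU-side equivalent would run the same shape through PyTorch on the `mps` backend (Apple) or `cuda` backend (NVIDIA); the paper's end-to-end LLM measurements go well beyond such a kernel-level probe.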
