CRIUgpu: Transparent Checkpointing of GPU-Accelerated Workloads
University of Oxford
arXiv:2502.16631 [cs.DC], 23 Feb 2025
@misc{stoyanov2025criugputransparentcheckpointinggpuaccelerated,
      title={CRIUgpu: Transparent Checkpointing of GPU-Accelerated Workloads},
      author={Radostin Stoyanov and Viktória Spišaková and Jesus Ramos and Steven Gurfinkel and Andrei Vagin and Adrian Reber and Wesley Armour and Rodrigo Bruno},
      year={2025},
      eprint={2502.16631},
      archivePrefix={arXiv},
      primaryClass={cs.DC},
      url={https://arxiv.org/abs/2502.16631}
}
Deep learning training at scale is resource-intensive and time-consuming, often running across hundreds or thousands of GPUs for weeks or months. Efficient checkpointing is crucial for running these workloads, especially in multi-tenant environments where compute resources are shared and job preemptions or interruptions are common. However, transparent and unified GPU snapshots are particularly challenging because of the architectural differences between CPUs and GPUs, including memory subsystems, dynamic parallelism, and thread synchronization. State-of-the-art GPU checkpointing techniques typically rely on mechanisms that intercept, log, and replay device API calls, but this approach adds performance overhead and requires hardware-specific implementations that are difficult to test, maintain, and integrate with existing container platforms. In this paper, we present CRIUgpu, a novel approach to transparent checkpointing of GPU-accelerated workloads that builds on recently introduced driver capabilities, enabling support for both CUDA and ROCm applications. Our evaluation shows that CRIUgpu works with a variety of deep learning and high-performance computing workloads running across multiple GPUs, completely eliminates steady-state performance overhead, and significantly reduces recovery times compared to state-of-the-art transparent GPU checkpointing mechanisms.
March 3, 2025 by hgpu