Concurrent Scheduling of High-Level Parallel Programs on Multi-GPU Systems
University of Innsbruck, Innsbruck, Austria
arXiv:2503.10516 [cs.DC], 13 Mar 2025
@misc{knorr2025concurrentschedulinghighlevelparallel,
  title={Concurrent Scheduling of High-Level Parallel Programs on Multi-GPU Systems},
  author={Fabian Knorr and Philip Salzmann and Peter Thoman and Thomas Fahringer},
  year={2025},
  eprint={2503.10516},
  archivePrefix={arXiv},
  primaryClass={cs.DC},
  url={https://arxiv.org/abs/2503.10516}
}
Parallel programming models can encourage performance portability by moving the responsibility for work assignment and data distribution from the programmer to a runtime system. However, analyzing the resulting implicit memory allocations, coherence operations, and their interdependencies can quickly introduce delays into the latency-sensitive execution pipeline of a distributed-memory application. In this paper, we show how graph-based intermediate representations help move such scheduling work out of the critical path. In the context of SYCL programs distributed onto accelerator clusters, we introduce the instruction graph, a low-level representation that preserves full concurrency between memory management, data transfers, MPI peer-to-peer communication, and kernel invocation. Through integration with the Celerity runtime, we demonstrate how instruction-graph scheduling enables a system architecture that performs this analysis concurrently with execution. Using a scheduler lookahead mechanism, we further detect changing access patterns to optimize memory allocation in the presence of virtualized buffers. We show the effectiveness of our method through strong-scaling benchmarks with multiple Celerity applications on up to 128 GPUs in a production cluster.
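The core idea of an instruction graph that "preserves full concurrency" can be illustrated with a minimal sketch. The node names (`alloc`, `copy_h2d`, `kernel_a`, …) and the executor below are hypothetical illustrations, not Celerity's actual API: each low-level operation becomes a graph node with explicit dependencies, and an executor launches every node as soon as its dependencies complete, so independent kernels and transfers naturally overlap.

```python
from concurrent.futures import ThreadPoolExecutor

class Instruction:
    """A node in a simplified instruction graph: a payload plus its dependencies."""
    def __init__(self, name, fn, deps=()):
        self.name, self.fn, self.deps = name, fn, list(deps)

def execute(graph):
    """Run each instruction as soon as all of its dependencies have finished.

    `graph` must be topologically sorted, so every dependency is submitted
    before its dependents. Returns the order in which instructions completed.
    """
    futures, order = {}, []
    with ThreadPoolExecutor(max_workers=len(graph)) as pool:
        def run(instr):
            for dep in instr.deps:
                futures[dep].result()  # block on dependencies only
            instr.fn()
            order.append(instr.name)  # list.append is thread-safe in CPython
        for instr in graph:
            futures[instr] = pool.submit(run, instr)
        for f in futures.values():
            f.result()
    return order

# A toy pipeline: allocate, copy in, run two independent kernels, copy out.
alloc = Instruction("alloc", lambda: None)
h2d   = Instruction("copy_h2d", lambda: None, [alloc])
k1    = Instruction("kernel_a", lambda: None, [h2d])
k2    = Instruction("kernel_b", lambda: None, [h2d])
d2h   = Instruction("copy_d2h", lambda: None, [k1, k2])
order = execute([alloc, h2d, k1, k2, d2h])
```

`kernel_a` and `kernel_b` may finish in either order, while `alloc` is always first and `copy_d2h` always last; only the true data dependencies constrain execution, which is what lets the real scheduler run graph construction and analysis off the critical path.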
March 23, 2025 by hgpu