
Hierarchical Resource Partitioning on Modern GPUs: A Reinforcement Learning Approach

Urvij Saroliya, Eishi Arima, Dai Liu, Martin Schulz
Technical University of Munich, Garching, Germany
arXiv:2405.08754 [cs.DC], 14 May 2024

@inproceedings{Saroliya_2023,
   title={Hierarchical Resource Partitioning on Modern GPUs: A Reinforcement Learning Approach},
   author={Saroliya, Urvij and Arima, Eishi and Liu, Dai and Schulz, Martin},
   booktitle={2023 IEEE International Conference on Cluster Computing (CLUSTER)},
   publisher={IEEE},
   year={2023},
   month={oct},
   doi={10.1109/CLUSTER52292.2023.00023},
   url={http://dx.doi.org/10.1109/CLUSTER52292.2023.00023}
}


GPU-based heterogeneous architectures are now commonly used in HPC clusters. Due to their architectural simplicity, specialized for data-level parallelism, GPUs can offer much higher computational throughput and memory bandwidth than same-generation CPUs. However, as the resources available in GPUs have increased exponentially over the past decades, it has become increasingly difficult for a single program to fully utilize them. As a consequence, the industry has started supporting several resource partitioning features to improve resource utilization by co-scheduling multiple programs on the same GPU die at the same time. Driven by this technological trend, this paper focuses on hierarchical resource partitioning on modern GPUs. As an example, we utilize a combination of two different features available on recent NVIDIA GPUs in a hierarchical manner: MPS (Multi-Process Service), a finer-grained logical partitioning, and MIG (Multi-Instance GPU), a coarse-grained physical partitioning. We propose a method for comprehensively co-optimizing the setup of hierarchical partitioning and the selection of co-scheduling groups from a given set of jobs, based on reinforcement learning using their profiles. Our thorough experimental results demonstrate that our approach can successfully set up job concurrency, partitioning, and co-scheduling group selections simultaneously, resulting in a maximum throughput improvement of 1.87x compared to time-sharing scheduling.
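The core idea of learning a good partitioning setup from observed rewards can be illustrated with a minimal epsilon-greedy bandit sketch. This is not the paper's actual algorithm (which uses job profiles and co-scheduling group selection); the configuration names and throughput numbers below are hypothetical stand-ins for measured co-run performance.

```python
import random

# Hypothetical hierarchical partition setups: a MIG layout optionally
# subdivided by MPS shares (illustrative names, not real API strings).
CONFIGS = ["mig_7g", "mig_3g+3g", "mig_3g+3g_mps50", "mig_2g+2g+2g_mps33"]

# Fabricated mean co-run throughput per setup (demo values only).
THROUGHPUT = {"mig_7g": 1.00, "mig_3g+3g": 1.35,
              "mig_3g+3g_mps50": 1.87, "mig_2g+2g+2g_mps33": 1.60}

def epsilon_greedy(episodes=2000, eps=0.1, seed=0):
    """Bandit-style search over partitioning setups (illustrative)."""
    rng = random.Random(seed)
    q = {c: 0.0 for c in CONFIGS}   # running reward estimates
    n = {c: 0 for c in CONFIGS}     # visit counts per setup
    for _ in range(episodes):
        if rng.random() < eps:
            cfg = rng.choice(CONFIGS)        # explore a random setup
        else:
            cfg = max(q, key=q.get)          # exploit current best
        # Noisy throughput observation stands in for running the jobs.
        reward = THROUGHPUT[cfg] + rng.gauss(0, 0.05)
        n[cfg] += 1
        q[cfg] += (reward - q[cfg]) / n[cfg]  # incremental mean update
    return max(q, key=q.get)

print(epsilon_greedy())
```

In the actual system, the reward would come from running the selected co-scheduling group under the chosen MIG/MPS configuration, and the state space would be far richer than this single-step bandit.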

