Optimizing Hardware Resource Partitioning and Job Allocations on Modern GPUs under Power Caps

Eishi Arima, Minjoon Kang, Issa Saba, Josef Weidendorfer, Carsten Trinitis, Martin Schulz
Technical University of Munich, Garching, Germany
arXiv:2405.03838 [cs.DC], 6 May 2024

@inproceedings{Arima_2022,
   title={Optimizing Hardware Resource Partitioning and Job Allocations on Modern GPUs under Power Caps},
   author={Arima, Eishi and Kang, Minjoon and Saba, Issa and Weidendorfer, Josef and Trinitis, Carsten and Schulz, Martin},
   booktitle={Workshop Proceedings of the 51st International Conference on Parallel Processing},
   series={ICPP '22},
   publisher={ACM},
   year={2022},
   month={aug},
   doi={10.1145/3547276.3548630},
   url={http://dx.doi.org/10.1145/3547276.3548630}
}

CPU-GPU heterogeneous systems are now commonly used in HPC (High-Performance Computing). However, improving the utilization and energy efficiency of such systems remains one of the most critical issues. As a single program typically cannot fully utilize all resources within a node/chip, co-scheduling (or co-locating) multiple programs with complementary resource requirements is a promising solution. Meanwhile, as power consumption has become a first-class design constraint for HPC systems, such co-scheduling techniques must be well-tailored to power-constrained environments. To this end, the industry has recently started supporting hardware-level resource partitioning features on modern GPUs to enable efficient co-scheduling, and these features can operate together with existing power capping mechanisms. For example, NVIDIA's MIG (Multi-Instance GPU) partitions a single GPU into multiple instances at the granularity of a GPC (Graphics Processing Cluster). In this paper, we explicitly target the combination of hardware-level GPU partitioning features and power capping for power-constrained HPC systems. We provide a systematic methodology to optimize the combination of chip partitioning, job allocations, and power capping based on our scalability/interference modeling, while taking a variety of aspects into account, such as compute/memory intensity and utilization of heterogeneous computational resources (e.g., Tensor Cores). The experimental results indicate that our approach successfully selects a near-optimal combination across multiple different workloads.
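The optimization described above is, in essence, a search over partition sizes, job placements, and power caps guided by fitted scalability and interference models. As a rough illustration of that search structure only (not the authors' actual models, parameters, or search algorithm), the following Python sketch brute-forces the GPC split and a shared power cap for two co-located jobs under a power budget; the throughput and interference functions and all constants are invented placeholders.

```python
# Illustrative sketch (not the paper's actual model): brute-force search over
# MIG-style GPC partitionings and power caps for two co-scheduled jobs.

TOTAL_GPCS = 7                        # an A100-class GPU exposes 7 GPCs to MIG
POWER_CAPS = [150, 200, 250, 300]     # candidate GPU power caps in watts (assumed)
POWER_BUDGET = 300                    # power budget in watts (assumed)

def throughput(job, gpcs, power):
    """Toy scalability model: throughput saturates with GPCs and power.
    The paper fits such curves from profiling; these constants are made up."""
    compute_scale = gpcs / (gpcs + job["compute_knee"])
    power_scale = power / (power + job["power_knee"])
    return job["peak"] * compute_scale * power_scale

def co_run_penalty(job_a, job_b):
    """Toy interference factor for shared resources (e.g., memory bandwidth)."""
    return 1.0 - 0.1 * min(job_a["mem_intensity"], job_b["mem_intensity"])

def best_configuration(job_a, job_b):
    """Return (total throughput, GPCs for A, GPCs for B, power cap) maximizing
    modeled aggregate throughput under the power budget."""
    best = None
    for gpcs_a in range(1, TOTAL_GPCS):          # split GPCs between the two jobs
        gpcs_b = TOTAL_GPCS - gpcs_a
        for cap in POWER_CAPS:
            if cap > POWER_BUDGET:
                continue
            penalty = co_run_penalty(job_a, job_b)
            total = penalty * (throughput(job_a, gpcs_a, cap) +
                               throughput(job_b, gpcs_b, cap))
            if best is None or total > best[0]:
                best = (total, gpcs_a, gpcs_b, cap)
    return best

# Example jobs: one compute-bound, one memory-bound (all parameters illustrative).
job_a = {"peak": 100.0, "compute_knee": 2.0, "power_knee": 80.0, "mem_intensity": 0.2}
job_b = {"peak": 60.0, "compute_knee": 1.0, "power_knee": 40.0, "mem_intensity": 0.9}
print(best_configuration(job_a, job_b))
```

In practice, the paper's approach would replace the toy functions with models calibrated per workload (taking compute/memory intensity and Tensor Core utilization into account) and apply the selected partitioning and power cap through the GPU's MIG and power capping interfaces.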

