
Predictive Runtime Code Scheduling for Heterogeneous Architectures

Victor Jimenez, Lluis Vilanova, Isaac Gelado, Marisa Gil, Grigori Fursin, Nacho Navarro
Barcelona Supercomputing Center (BSC)
In High Performance Embedded Architectures and Compilers, Volume 5409/2009 (2009), pp. 19-33

@article{jimenez2009predictive,
   title={Predictive runtime code scheduling for heterogeneous architectures},
   author={Jim{\'e}nez, V. and Vilanova, L. and Gelado, I. and Gil, M. and Fursin, G. and Navarro, N.},
   journal={High Performance Embedded Architectures and Compilers},
   pages={19--33},
   year={2009},
   publisher={Springer}
}

Heterogeneous architectures are currently widespread. With the advent of easy-to-program general purpose GPUs, virtually every recent desktop computer is a heterogeneous system. Combining the CPU and the GPU provides substantial processing power. However, such architectures are often used in a restricted way for domain-specific applications like scientific applications and games, and they tend to be used by a single application at a time. We envision future heterogeneous computing systems where all their heterogeneous resources are continuously utilized by different applications with versioned critical parts, so that the system can better adapt its behavior and improve execution time, power consumption, response time and other constraints at runtime. Under such a model, adaptive scheduling becomes a critical component. In this paper, we propose a novel predictive user-level scheduler based on past performance history for heterogeneous systems. We developed several scheduling policies and present a study of their impact on system performance. We demonstrate that such a scheduler allows multiple applications to fully utilize all available processing resources in CPU/GPU-like systems and consistently achieves speedups ranging from 30% to 40% compared to just using the GPU in a single application mode.
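The abstract describes a user-level scheduler that uses past performance history to decide whether a versioned code region should run on the CPU or the GPU. The Python sketch below illustrates that general idea under simple assumptions (per-kernel runtime averaging and earliest-predicted-finish device selection); the class, method names, and policy details are hypothetical and are not taken from the paper's implementation.

# Minimal sketch of history-based predictive scheduling for a CPU/GPU system.
# Names and policy choices are illustrative assumptions, not the authors' code.
from collections import defaultdict
from statistics import mean

class PredictiveScheduler:
    def __init__(self, devices=("cpu", "gpu")):
        self.devices = devices
        self.history = defaultdict(list)              # (kernel, device) -> past runtimes
        self.busy_until = {d: 0.0 for d in devices}   # predicted time each device frees up

    def predict(self, kernel, device, default=1.0):
        # Predict runtime from past observations; fall back to a default with no history.
        runs = self.history[(kernel, device)]
        return mean(runs) if runs else default

    def schedule(self, kernel, now):
        # Pick the device with the earliest predicted finish time for this kernel.
        def finish(d):
            start = max(now, self.busy_until[d])
            return start + self.predict(kernel, d)
        device = min(self.devices, key=finish)
        self.busy_until[device] = finish(device)
        return device

    def record(self, kernel, device, elapsed):
        # Feed measured runtimes back into the history used for future predictions.
        self.history[(kernel, device)].append(elapsed)

# Example use: choose a device for a versioned kernel, run it, then record the timing.
sched = PredictiveScheduler()
dev = sched.schedule("matmul", now=0.0)
sched.record("matmul", dev, elapsed=0.8)

This sketch only captures the feedback loop (predict, dispatch, measure, update); the paper evaluates several richer scheduling policies on top of the performance-history mechanism.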
