Development of Generic Scheduling Concepts for OpenGL ES 2.0

Waqas Tanveer
Institute for Parallel and Distributed Systems, Universität Stuttgart, Universitätsstraße 38, 70569 Stuttgart, Germany
Universität Stuttgart, 2013

@article{tanveer2013development,
   title={Development of Generic Scheduling Concepts for OpenGL ES 2.0},
   author={Tanveer, Waqas},
   year={2013}
}

The ability of a Graphics Processing Unit (GPU) to perform efficient and massively parallel computations makes it the device of choice for 3D graphics applications. It has been used extensively as a hardware accelerator to boost the performance of a single application, such as a 3D game. However, due to the increasing number of 3D rendering applications and limiting resource constraints, such as cost and space (especially on embedded platforms), a single GPU needs to be shared between multiple concurrent applications (GPU multitasking). Especially in safety-relevant scenarios, e.g., automotive applications, certain Quality of Service (QoS) requirements, such as average frame rates and priorities, apply. In this work we analyze and discuss the requirements and concepts for the scheduling of 3D rendering commands. We therefore propose our Fine-Grained Semantics Driven Scheduling (FG-SDS) concept. Since existing GPUs cannot be preempted, the execution of GPU command blocks is selectively delayed depending on the applications' priorities and frame rate requirements. As FG-SDS supports and uses the OpenGL ES 2.0 rendering API, it is highly portable and flexible. We have implemented FG-SDS and evaluated its performance and effectiveness on an automotive embedded system. Our evaluations indicate that FG-SDS is able to ensure that the required frame rates and deadlines of the high-priority application are met, provided the schedule is feasible. The overhead introduced by GPU scheduling is non-negligible, but we consider it reasonable with respect to the GPU resource prioritization it achieves.
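To make the core idea concrete: because command blocks on a non-preemptible GPU run to completion, a scheduler of this kind must decide, per block, whether to dispatch now or delay. The C sketch below illustrates one such admission test. It is not taken from the thesis; the `App` structure, the `may_dispatch` helper, the block cost estimate, and all numbers are illustrative assumptions.

```c
/*
 * Illustrative sketch (not the FG-SDS implementation): a low-priority
 * command block is dispatched only if its estimated, non-preemptible
 * execution time fits into the slack before the high-priority
 * application's next frame deadline; otherwise it is delayed.
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int    priority;         /* higher value = more important          */
    double frame_period_ms;  /* derived from the required frame rate   */
    double next_deadline_ms; /* absolute time of the next frame        */
} App;

static bool may_dispatch(const App *requester, const App *high_prio,
                         double now_ms, double block_cost_ms)
{
    /* The high-priority application may always dispatch. */
    if (requester->priority >= high_prio->priority)
        return true;

    /* A low-priority block may only run if it finishes before the
     * high-priority application needs the GPU again.                 */
    double slack_ms = high_prio->next_deadline_ms - now_ms;
    return block_cost_ms <= slack_ms;
}

int main(void)
{
    App cluster = { .priority = 10, .frame_period_ms = 16.7,
                    .next_deadline_ms = 100.0 }; /* e.g. 60 fps HMI    */
    App nav     = { .priority = 1,  .frame_period_ms = 33.3,
                    .next_deadline_ms = 120.0 }; /* best-effort app    */

    double now = 95.0; /* 5 ms of slack remain before the deadline     */

    /* A 3 ms block fits into the remaining slack and is dispatched... */
    printf("3 ms block: %s\n",
           may_dispatch(&nav, &cluster, now, 3.0) ? "dispatch" : "delay");

    /* ...an 8 ms block would overrun the deadline and is delayed.     */
    printf("8 ms block: %s\n",
           may_dispatch(&nav, &cluster, now, 8.0) ? "dispatch" : "delay");
    return 0;
}
```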
