RGEM: A Responsive GPGPU Execution Model for Runtime Engines

Shinpei Kato, Karthik Lakshmanan, Aman Kumar, Mihir Kelkar, Yutaka Ishikawa, Ragunathan (Raj) Rajkumar
Department of Computer Science, University of California Santa Cruz
Proceedings of the 32nd IEEE Real-Time Systems Symposium (RTSS’11), 2011

@inproceedings{kato2011rgem,
   title={RGEM: A Responsive GPGPU Execution Model for Runtime Engines},
   author={Kato, S. and Lakshmanan, K. and Ishikawa, Y. and Rajkumar, R.},
   booktitle={Proc. of the 32nd IEEE Real-Time Systems Symposium},
   year={2011}
}

General-purpose computing on graphics processing units, also known as GPGPU, is a burgeoning technique for accelerating parallel programs. Applying this technique to real-time applications, however, requires additional support for timely execution. In particular, the non-preemptive nature of GPGPU operations, associated with copying data to/from the device memory and launching code onto the device, must be managed in a timely manner. In this paper, we present a responsive GPGPU execution model (RGEM), a user-space runtime solution that protects the response times of high-priority GPGPU tasks from competing workloads. RGEM splits a memory-copy transaction into multiple chunks so that preemption points appear at chunk boundaries. It also ensures that only the highest-priority GPGPU task launches code onto the device at any given time, to avoid performance interference caused by concurrent launches. A prototype implementation of an RGEM-based CUDA runtime engine is provided to evaluate the real-world impact of RGEM. Our experiments demonstrate that the response times of high-priority GPGPU tasks can be protected under RGEM, whereas without RGEM support their response times grow in an unbounded fashion as the data sizes of competing workloads increase.
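The chunked-copy mechanism described in the abstract can be sketched independently of CUDA. The following Python fragment (an illustrative sketch, not the paper's implementation; the names `chunked_copy` and `preempt_requested` are made up for this example) shows how splitting one copy transaction into fixed-size chunks creates preemption points at chunk boundaries, bounding the blocking imposed on a higher-priority task by the cost of a single chunk.

```python
# Illustrative sketch of RGEM-style chunked copying (not the paper's code).
# A large copy is split into fixed-size chunks; between chunks the copier
# checks whether a higher-priority task is waiting and could yield to it.

def chunked_copy(src, dst, chunk_size, preempt_requested=lambda: False):
    """Copy src into dst chunk by chunk, with a preemption point
    between chunks. Returns the number of chunks copied."""
    assert len(dst) >= len(src)
    chunks = 0
    for off in range(0, len(src), chunk_size):
        end = min(off + chunk_size, len(src))
        dst[off:end] = src[off:end]      # one non-preemptible chunk
        chunks += 1
        if preempt_requested():
            pass  # a real runtime engine would block here until re-dispatched
    return chunks

if __name__ == "__main__":
    src = bytearray(range(256)) * 4      # 1024-byte source buffer
    dst = bytearray(len(src))
    n = chunked_copy(src, dst, chunk_size=100)
    print(n, dst == src)                 # → 11 True
```

The chunk size trades off responsiveness against throughput: smaller chunks shorten the worst-case blocking of a high-priority task but add per-chunk overhead, which is the tuning knob the paper's model exposes.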

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors