
RGEM: A Responsive GPGPU Execution Model for Runtime Engines

Shinpei Kato, Karthik Lakshmanan, Aman Kumar, Mihir Kelkar, Yutaka Ishikawa, Ragunathan (Raj) Rajkumar
Department of Computer Science, University of California Santa Cruz
IEEE 32nd Real-Time Systems Symposium (RTSS), 2011

@inproceedings{kato2011rgem,
  title        = {{RGEM}: A responsive {GPGPU} execution model for runtime engines},
  author       = {Kato, Shinpei and Lakshmanan, Karthik and Kumar, Aman and Kelkar, Mihir and Ishikawa, Yutaka and Rajkumar, Ragunathan},
  booktitle    = {Proceedings of the 32nd IEEE Real-Time Systems Symposium (RTSS)},
  pages        = {57--66},
  year         = {2011},
  organization = {IEEE}
}


General-purpose computing on graphics processing units, also known as GPGPU, is a burgeoning technique for accelerating parallel programs. Applying this technique to real-time applications, however, requires additional support for timeliness of execution. In particular, the non-preemptive nature of GPGPU operations, associated with copying data to/from the device memory and launching code onto the device, needs to be managed in a timely manner. In this paper, we present a responsive GPGPU execution model (RGEM), which is a user-space runtime solution that protects the response times of high-priority GPGPU tasks from competing workloads. RGEM splits a memory-copy transaction into multiple chunks so that preemption points appear at chunk boundaries. It also ensures that only the highest-priority GPGPU task launches code onto the device at any given time, to avoid performance interference caused by concurrent launches. A prototype implementation of an RGEM-based CUDA runtime engine is provided to evaluate the real-world impact of RGEM. Our experiments demonstrate that the response times of high-priority GPGPU tasks can be protected under RGEM, whereas without RGEM support their response times grow in an unbounded fashion as the data sizes of competing workloads increase.

