Exploiting GPU On-chip Shared Memory for Accelerating Schedulability Analysis

Swaroop Nunna, Unmesh D. Bordoloi, Samarjit Chakraborty, Petru Eles, Zebo Peng
Tech. Univ. Munich, Munich, Germany
International Symposium on Electronic System Design (ISED), 2010


@inproceedings{nunna2010exploiting,
  title={Exploiting GPU On-Chip Shared Memory for Accelerating Schedulability Analysis},
  author={Nunna, S. and Bordoloi, U.D. and Chakraborty, S. and Eles, P. and Peng, Z.},
  booktitle={2010 International Symposium on Electronic System Design},
  year={2010},
}




Embedded electronic devices like mobile phones and automotive control units must perform under strict timing constraints. As such, schedulability analysis constitutes an important phase of the design cycle of these devices. Unfortunately, schedulability analysis for most realistic task models turns out to be computationally intractable (NP-hard). Naturally, in the recent past, different techniques have been proposed to accelerate schedulability analysis algorithms, including parallel computing on Graphics Processing Units (GPUs). However, applying traditional GPU programming methods in this context restricts the effective use of on-chip memory and in turn prevents the inherent parallel processing capabilities of GPUs from being fully exploited. In this paper, we explore the possibility of accelerating schedulability analysis algorithms on GPUs while exploiting on-chip memory. Experimental results demonstrate up to 9x speedup of our GPU-based algorithms over implementations on sequential CPUs.
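For context on the kind of computation being accelerated, the following is a minimal sequential C sketch of a classic fixed-priority response-time test, a standard form of schedulability analysis. This is an illustrative example, not the authors' exact algorithm: the task struct and function names are hypothetical, and the GPU mapping noted in the comments (one independent check per thread, with task parameters staged in on-chip shared memory) only paraphrases the general approach described in the abstract.

```c
#include <stdbool.h>

/* Hypothetical task model: worst-case execution time C and period T,
 * with implicit deadline D = T; tasks are listed in priority order. */
typedef struct { unsigned C, T; } Task;

/* Standard response-time analysis for fixed-priority preemptive
 * scheduling: iterate R = C_i + sum_{j<i} ceil(R / T_j) * C_j until it
 * converges or exceeds the deadline. On a GPU, many such independent
 * checks (e.g., one candidate task set per thread) can run in parallel,
 * with the task parameters kept in fast on-chip shared memory. */
bool schedulable(const Task *ts, int n) {
    for (int i = 0; i < n; i++) {
        unsigned R = ts[i].C, prev = 0;
        while (R != prev) {
            prev = R;
            unsigned interference = 0;
            for (int j = 0; j < i; j++) /* ceil(prev / T_j) * C_j */
                interference += ((prev + ts[j].T - 1) / ts[j].T) * ts[j].C;
            R = ts[i].C + interference;
            if (R > ts[i].T) return false; /* deadline miss */
        }
    }
    return true;
}
```

The inner fixed-point iteration is what makes such tests expensive for large task sets, and is why the independent per-task-set checks are attractive targets for GPU parallelization.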

* * *

HGPU group © 2010-2021 hgpu.org

All rights belong to the respective authors
