Exploiting GPU On-Chip Shared Memory for Accelerating Schedulability Analysis
Tech. Univ. Munich, Munich, Germany
International Symposium on Electronic System Design (ISED), 2010
@conference{nunna2010exploiting,
title={Exploiting GPU On-Chip Shared Memory for Accelerating Schedulability Analysis},
author={Nunna, S. and Bordoloi, U.D. and Chakraborty, S. and Eles, P. and Peng, Z.},
booktitle={2010 International Symposium on Electronic System Design},
pages={147--152},
year={2010},
organization={IEEE}
}
Embedded electronic devices like mobile phones and automotive control units must perform under strict timing constraints. As such, schedulability analysis constitutes an important phase of the design cycle of these devices. Unfortunately, schedulability analysis for most realistic task models turns out to be computationally intractable (NP-hard). Naturally, in the recent past, different techniques have been proposed to accelerate schedulability analysis algorithms, including parallel computing on Graphics Processing Units (GPUs). However, applying traditional GPU programming methods in this context restricts the effective usage of on-chip memory and in turn imposes limitations on fully exploiting the inherent parallel processing capabilities of GPUs. In this paper, we explore the possibility of accelerating schedulability analysis algorithms on GPUs while exploiting the usage of on-chip memory. Experimental results demonstrate up to 9x speedup of our GPU-based algorithms over implementations on sequential CPUs.
May 3, 2011 by hgpu