
GPU-accelerated Model Checking of Periodic Self-Suspending Real-Time Tasks

Per-Erik Mahl, Tim Liberg
Mälardalen University, Västerås, Sweden
Mälardalen University, 2012

@mastersthesis{mahl2012gpu,
   title={GPU-accelerated Model Checking of Periodic Self-Suspending Real-Time Tasks},
   author={Mahl, P.E. and Liberg, T.},
   school={M{\"a}lardalen University},
   year={2012}
}

Efficient model checking is important in order to make this type of software verification useful for systems with a complex structure. If a system is too large or complex, model checking simply does not scale, i.e., verifying the system could take too much time. This is one strong argument for making model checking faster. Another interesting aim is to make model checking fast enough that it can be used for predicting scheduling decisions for real-time schedulers at runtime. This of course requires the model checking to complete within an order of milliseconds or even microseconds. The aim is set very high, but the results of this thesis will at least give a hint on whether this seems possible. The magic card for (maybe) making this possible is called the Graphics Processing Unit (GPU). This thesis investigates if and how a model checking algorithm can be ported to and executed on a GPU. Modern GPU architectures offer a high degree of processing power, since they are equipped with up to 1000 (Nvidia GTX 590) or 3000 (Nvidia Tesla K10) processor cores. The drawback is that they offer poor thread-communication possibilities and memory caches compared to CPUs, which makes it very difficult to port CPU programs to GPUs. The example model (system) used in this thesis represents a real-time task scheduler that can schedule up to three periodic self-suspending tasks. The aim is to verify the system, i.e., find a feasible schedule for these tasks, and to do it as fast as possible with the help of the GPU.
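The kind of verification question the abstract describes, whether a feasible schedule exists for a set of periodic tasks, can be illustrated with a small CPU-only sketch. This is an illustrative assumption, not the thesis's actual algorithm: it ignores self-suspension and GPU parallelism, assumes implicit deadlines (deadline = period), and explores the scheduler's per-time-unit choices exhaustively over one hyperperiod, which is the state-space search that a model checker performs (and that the thesis aims to parallelize on the GPU). The function name `feasible` and the example task sets are made up for this sketch.

```python
from math import lcm
from functools import lru_cache

def feasible(tasks):
    """Exhaustively explore every per-time-unit scheduling choice over one
    hyperperiod and report whether any choice sequence meets all deadlines.
    Tasks are (execution_time, period) pairs with implicit deadlines."""
    hyper = lcm(*(period for _, period in tasks))

    @lru_cache(maxsize=None)
    def explore(t, remaining):
        rem = list(remaining)
        for i, (c, p) in enumerate(tasks):
            if t % p == 0:                # a new job of task i is released at t
                if t > 0 and rem[i] > 0:  # previous job unfinished: deadline miss
                    return False
                if t < hyper:
                    rem[i] = c
        if t == hyper:                    # every job in [0, hyper) met its deadline
            return True
        ready = [i for i, r in enumerate(rem) if r > 0]
        if not ready:                     # nothing to run: idle one time unit
            return explore(t + 1, tuple(rem))
        # branch on which ready job runs during [t, t+1)
        return any(
            explore(t + 1, tuple(r - 1 if j == i else r for j, r in enumerate(rem)))
            for i in ready
        )

    return explore(0, tuple(0 for _ in tasks))

print(feasible([(1, 3), (1, 4), (2, 6)]))  # True: utilization ~0.92, a schedule exists
print(feasible([(2, 3), (2, 4)]))          # False: utilization > 1, infeasible
```

The branching `any(...)` over the ready jobs is what makes this model checking rather than simulation of one fixed scheduler, and it is also the source of the state explosion that motivates GPU acceleration: each scheduling decision multiplies the number of states to explore.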