Fast Makespan Estimation for GPU Threads on a Single Streaming Multiprocessor

Kostiantyn Berezovskyi, Konstantinos Bletsas, Stefan M. Petters
CISTER Research Unit, Polytechnic Institute of Porto (ISEP-IPP), Rua Dr. Antonio Bernardino de Almeida, 431, 4200-072 Porto, Portugal
Technical Report CISTER-TR-130406, 2013
@techreport{berezovskyi2013fast,
  title       = {Fast Makespan Estimation for GPU Threads on a Single Streaming Multiprocessor},
  author      = {Berezovskyi, Kostiantyn and Bletsas, Konstantinos and Petters, Stefan M.},
  year        = {2013},
  institution = {Technical Report HURRAYTR-111215, CISTER/INESC-TEC, ISEP Research Center, Polytechnic Institute of Porto, Available at http://www.cister.isep.ipp.pt/people/Kostiantyn%2BBerezovskyi/publications}
}

Graphics Processing Units (GPUs) are widely used to offload CPUs, free up other resources of a computer system, and provide an alternative to multiprocessor machines for computationally expensive parallel tasks. The recent trend of using GPUs in embedded systems necessitates timing-analysis techniques for finding the joint worst-case execution time of a group of GPU threads of the same parallel application on a streaming multiprocessor. State-of-the-art approaches for computing the exact maximum makespan of GPU threads running on a single streaming multiprocessor are intractable, and even pessimistic approximations usually take a long time to compute. We therefore develop a technique for estimating the maximum makespan using metaheuristics. Its simplicity, flexibility, and suitability for massive parallelization make it a promising technique for soft real-time systems.
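The abstract does not describe the paper's actual algorithm, so the following is only a hypothetical sketch of what metaheuristic makespan estimation can look like: a simulated-annealing-style search over warp issue orders on a toy execution-unit model. The greedy `makespan` model, the `units` parameter, and the cooling schedule are all illustrative assumptions, not the authors' method.

```python
import random

def makespan(order, latencies, units=4):
    """Toy SM model: each warp in 'order' is dispatched to the
    earliest-available execution unit; makespan = latest finish time."""
    finish = [0.0] * units
    for i in order:
        u = finish.index(min(finish))  # earliest-available unit
        finish[u] += latencies[i]
    return max(finish)

def estimate_max_makespan(latencies, units=4, iters=2000, seed=0):
    """Search for a warp issue order that maximizes the toy makespan.
    Returns a lower bound on the true maximum (metaheuristics cannot
    guarantee the exact worst case)."""
    rng = random.Random(seed)
    order = list(range(len(latencies)))
    cur = best = makespan(order, latencies, units)
    temp = 1.0  # acceptance probability for worsening moves, decays over time
    for _ in range(iters):
        # neighbor move: swap two positions in the issue order
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
        cand = makespan(order, latencies, units)
        if cand >= cur or rng.random() < temp:
            cur = cand
            best = max(best, cur)
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
        temp *= 0.995
    return best
```

Because the search only keeps the best schedule seen, the estimate can underapproximate the true maximum makespan, which is why such techniques target soft rather than hard real-time systems.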

