
Software Pipelined Execution of Stream Programs on GPUs

Abhishek Udupa, R. Govindarajan, Matthew J. Thazhuthaveetil
Dept. of Computer Science and Automation, Indian Institute of Science, Bangalore, India
International Symposium on Code Generation and Optimization (CGO 2009), pp. 200-209

@conference{udupa2009software,
   title={Software pipelined execution of stream programs on GPUs},
   author={Udupa, A. and Govindarajan, R. and Thazhuthaveetil, M. J.},
   booktitle={Code Generation and Optimization, 2009. CGO 2009. International Symposium on},
   pages={200--209},
   year={2009},
   organization={IEEE}
}


The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general-purpose multi-core architectures. This model allows programmers to specify the structure of a program as a set of filters that act upon data and a set of communication channels between them. StreamIt graphs expose task, data, and pipeline parallelism, which can be exploited on modern graphics processing units (GPUs), as they support abundant parallelism in hardware. In this paper, we describe the challenges in mapping StreamIt to GPUs and propose an efficient technique to software pipeline the execution of stream programs on GPUs. We formulate this problem, covering both the scheduling and the assignment of filters to processors, as an efficient integer linear program (ILP), which is then solved using ILP solvers. We also describe a novel buffer layout technique for GPUs which facilitates exploiting the high memory bandwidth available in GPUs. The proposed scheduling utilizes both the scalar units in the GPU, to exploit data parallelism, and the multiprocessors, to exploit task and pipeline parallelism. Further, it takes into consideration the synchronization and bandwidth limitations of GPUs, and yields speedups between 1.87X and 36.83X over a single-threaded CPU.
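As a rough, hypothetical illustration of the ideas the abstract mentions (data-parallel execution of a filter's instances across GPU threads, and a buffer layout that keeps global-memory accesses coalesced), the CUDA sketch below runs a trivial gain filter. It is not the authors' code; the kernel name gain_filter, the interleaved layout, and all parameters are invented for illustration. With this layout, element t of instance i sits at index t * num_instances + i, so consecutive threads in a warp touch consecutive addresses and their loads and stores coalesce into a small number of memory transactions.

// Hypothetical sketch, not the paper's implementation: N data-parallel
// instances of a simple StreamIt-style filter (y = gain * x) on the GPU,
// with buffers interleaved across instances so accesses are coalesced.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void gain_filter(const float *in, float *out,
                            int num_instances, int steady_iters, float gain)
{
    int inst = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per filter instance
    if (inst >= num_instances) return;
    for (int t = 0; t < steady_iters; ++t) {
        // interleaved layout: stride of num_instances per steady-state iteration
        out[t * num_instances + inst] = gain * in[t * num_instances + inst];
    }
}

int main()
{
    const int N = 256, T = 4;              // instances, steady-state iterations (illustrative values)
    const size_t bytes = N * T * sizeof(float);
    float h_in[N * T], h_out[N * T];
    for (int i = 0; i < N * T; ++i) h_in[i] = (float)i;

    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    gain_filter<<<(N + 127) / 128, 128>>>(d_in, d_out, N, T, 2.0f);
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    printf("out[0] = %f\n", h_out[0]);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}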
