Exploitation of GPUs for the Parallelisation of Probably Parallel Legacy Code

Zheng Wang, Daniel Powell, Bjorn Franke, Michael O’Boyle
School of Computing and Communications, Lancaster University, United Kingdom
23rd International Conference on Compiler Construction (CC’14), 2014

@inproceedings{zheng_cc14,
  title={Exploitation of GPUs for the Parallelisation of Probably Parallel Legacy Code},
  author={Wang, Zheng and Powell, Daniel and Franke, Bjoern and O'Boyle, Michael F. P.},
  booktitle={The 23rd International Conference on Compiler Construction (CC)},
  year={2014}
}


General-purpose GPUs provide massive compute power, but are notoriously difficult to program. In this paper we present a complete compilation strategy to exploit GPUs for the parallelisation of sequential legacy code. Using hybrid data dependence analysis combining static and dynamic information, our compiler automatically detects suitable parallelism and generates parallel OpenCL code from sequential programs. We exploit the fact that dependence profiling provides us with parallel loop candidates that are highly likely to be genuinely parallel, but cannot be statically proven so. For the efficient GPU parallelisation of those probably parallel loop candidates, we propose a novel software speculation scheme, which ensures correctness for the unlikely, yet possible case of dynamically detected dependence violations. Our scheme operates in place and supports speculative read and write operations. We demonstrate the effectiveness of our approach in detecting and exploiting parallelism using sequential codes from the NAS benchmark suite. We achieve an average speedup of 3.2x, and up to 99x, over the sequential baseline. On average, this is 1.42 times faster than state-of-the-art speculation schemes and corresponds to 99% of the performance level of a manual GPU implementation developed by independent expert programmers.
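The core idea of software speculation for "probably parallel" loops can be illustrated with a small sketch. This is not the paper's actual scheme (which generates OpenCL and operates on the GPU); it is a hypothetical, simplified model in Python: iterations are executed against the shared array in place while their reads and writes are logged, a validation pass then checks for loop-carried flow dependences (an iteration reading a location written by an earlier iteration), and on a violation the array is rolled back and the loop is re-run sequentially. The function `speculative_loop` and its `body(i, read, write)` convention are inventions for this illustration.

```python
def speculative_loop(data, body):
    """Hypothetical sketch: speculatively run body(i, read, write) for each
    iteration i over `data`, then validate. Returns True if the speculation
    committed, False if a dependence violation forced a sequential re-run."""
    snapshot = list(data)                      # backup enabling in-place rollback
    n = len(data)
    reads = [set() for _ in range(n)]          # locations read by iteration i
    writes = [set() for _ in range(n)]         # locations written by iteration i

    def make_accessors(i):
        # Instrumented accessors that log which indices iteration i touches.
        def read(j):
            reads[i].add(j)
            return data[j]
        def write(j, v):
            writes[i].add(j)
            data[j] = v
        return read, write

    # "Parallel" phase (simulated sequentially here): iterations update the
    # shared array in place while their accesses are recorded.
    for i in range(n):
        r, w = make_accessors(i)
        body(i, r, w)

    # Validation: if iteration i read a location that an earlier iteration k
    # wrote, there is a loop-carried flow dependence and concurrent execution
    # would have been unsafe.
    violated = any(reads[i] & writes[k]
                   for i in range(n) for k in range(i))

    if violated:
        data[:] = snapshot                     # roll back speculative writes
        for i in range(n):                     # safe sequential re-execution
            body(i, lambda j: data[j],
                    lambda j, v: data.__setitem__(j, v))
        return False
    return True
```

For example, a loop with independent iterations (`w(i, r(i) * 2)`) commits, while one where iteration `i` reads the element written by iteration `i - 1` is detected as a violation and falls back to the sequential path. A real implementation would run the iterations concurrently and keep the logs in per-thread structures, but the commit/rollback logic follows the same shape.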
