
Automatic and portable mapping of data parallel programs to OpenCL for GPU-based heterogeneous systems

Zheng Wang, Dominik Grewe, Michael F.P. O’Boyle
Lancaster University
ACM Transactions on Architecture and Code Optimization (TACO), Volume 11, Issue 4, 2015
@article{wang2015automatic,
   title={Automatic and portable mapping of data parallel programs to OpenCL for GPU-based heterogeneous systems},
   author={Wang, Zheng and Grewe, Dominik and O'Boyle, Michael F. P.},
   journal={ACM Transactions on Architecture and Code Optimization (TACO)},
   volume={11},
   number={4},
   pages={42},
   year={2015},
   publisher={ACM}
}

General-purpose GPU-based systems are highly attractive, as they offer potentially massive performance at little cost. Realizing this potential, however, is challenging due to the complexity of programming. This article presents a compiler-based approach that automatically generates optimized OpenCL code from data-parallel OpenMP programs for GPUs. A key feature of our scheme is that it leverages existing transformations, especially data transformations, to improve performance on GPU architectures, and it uses machine learning to automatically build a predictive model that determines whether it is worthwhile to run the OpenCL code on the GPU or the OpenMP code on the multi-core host. We applied our approach to the entire NAS parallel benchmark suite and evaluated it on two distinct GPU-based systems. Over a sequential baseline, we achieved average (up to) speedups of 4.51x and 4.20x (143x and 67x) on a Core i7/NVIDIA GeForce GTX 580 platform and a Core i7/AMD Radeon 7970 platform, respectively. Our approach also achieves, on average, over 10x speedups over two state-of-the-art automatic GPU code generators.
