Autotuning Stencil Codes with Algorithmic Skeletons

Chris Cummins
Institute of Computing Systems Architecture, School of Informatics, University of Edinburgh
University of Edinburgh, 2015

@mastersthesis{cummins2015autotuning,
   title={Autotuning Stencil Codes with Algorithmic Skeletons},
   author={Cummins, Chris},
   school={University of Edinburgh},
   year={2015}
}



The physical limitations of microprocessor design have forced the industry towards increasingly heterogeneous architectures to extract performance. This trend has not been matched with software tools to cope with such parallelism, leading to a growing disparity between the levels of available performance and the ability for application developers to exploit it. Algorithmic skeletons simplify parallel programming by providing high-level, reusable patterns of computation. Achieving performant skeleton implementations is a difficult task; developers must attempt to anticipate and tune for a wide range of architectures and use cases. This results in implementations that target the general case and cannot provide the performance advantages that are gained from tuning low level optimisation parameters. To address this, I present OmniTune – an extensible and distributed framework for runtime autotuning of optimisation parameters. Targeting the workgroup size of OpenCL kernels, I demonstrate an implementation of OmniTune for stencil codes on CPUs and multi-GPU systems. I show in a comprehensive evaluation of 2.7×10^5 test cases that simple heuristics cannot provide portable performance across the range of architectures, kernels, and datasets which algorithmic skeletons must target. OmniTune uses procedurally generated synthetic benchmarks and machine learning to predict workgroup sizes for unseen programs. In an evaluation of 429 combinations of programs, architectures, and datasets, with up to 7.3×10^3 parameter values for each, OmniTune is able to achieve a median 94% of the available performance, providing a 1.33x speedup over the values selected by human experts, without requiring any user intervention. This adaptive tuning provides a median speedup of 3.79x (max 74.0x) over the best possible performance which can be achieved without autotuning.
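To make the tuning problem concrete: for a given OpenCL kernel and device there is a space of legal workgroup sizes (bounded by the device's maximum work-items per group), and the autotuner must select a well-performing point from that space. The following is a minimal, self-contained sketch of that selection problem, using a synthetic runtime model in place of real kernel measurements; the names `legal_sizes`, `runtime`, and `best_size` are illustrative and not part of OmniTune itself.

```python
from itertools import product

def legal_sizes(max_wg_size=1024, max_dim=64):
    """Enumerate 2D workgroup sizes (w, h) that respect a
    device limit on total work-items per group."""
    return [(w, h)
            for w, h in product(range(1, max_dim + 1), repeat=2)
            if w * h <= max_wg_size]

def runtime(size):
    """Stand-in for timing a stencil kernel: a synthetic model in
    which larger groups amortise overhead, but shapes far from a
    device-specific sweet spot (here (32, 8)) pay a penalty."""
    w, h = size
    return 1.0 / (w * h) + 0.0005 * (abs(w - 32) + abs(h - 8))

def best_size(sizes):
    """Exhaustive search: the oracle an autotuner approximates."""
    return min(sizes, key=runtime)

sizes = legal_sizes()
oracle = best_size(sizes)

# Fraction of the available performance achieved by a fixed
# heuristic choice, analogous to the evaluation metric in the
# abstract (a predicted size achieving a median 94%).
heuristic = (16, 16)
fraction = runtime(oracle) / runtime(heuristic)
```

The exhaustive search above is only feasible offline; the point of OmniTune's machine-learned predictor is to approximate the oracle's choice for unseen programs without paying this search cost at runtime.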

HGPU group © 2010-2016 hgpu.org

All rights belong to the respective authors

Contact us: