Autotuning Tensor Contraction Computations on GPUs

Axel Rivera, Mary Hall, Paul D. Hovland, Elizabeth Jessup, Thomas Nelson, Boyana Norris
School of Computing, University of Utah; Argonne National Laboratory; University of Colorado Boulder; University of Oregon
University of Utah, 2014

   title={Autotuning Tensor Contraction Computations on GPUs},

   author={Rivera, Axel and Hall, Mary and Hovland, Paul D. and Jessup, Elizabeth and Nelson, Thomas and Norris, Boyana},

   year={2014},






We describe a framework for generating optimized GPU code for computing tensor contractions, a multidimensional generalization of matrix-matrix multiplication that arises frequently in computational science applications. Typical performance optimization strategies for such computations transform the tensors into sequences of matrix-matrix multiplications to take advantage of an optimized BLAS library, but this approach is not appropriate for small tensors. We instead develop an autotuning strategy that generates CUDA variants from a sequential implementation and identifies the best-performing variant. We compare our generated code with that of OpenACC when offloading the same computation to the GPU. The straightforward OpenACC implementation is as much as 23X slower than our automatically generated code on benchmarks representative of the tensor contraction computations in two large-scale applications, Nek5000 and NWChem. However, we show that changes to the GPU thread-block decomposition and to register placement of data in the OpenACC annotations can achieve performance comparable to our automatically generated code. This result highlights limitations of the OpenACC compiler in targeting GPUs for computations, such as tensor contractions, with small trip counts and large dimensionality. It also suggests additional optimizations that can overcome these limitations.
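The contraction pattern and the OpenACC tuning knobs discussed above can be illustrated with a minimal sketch. All sizes, function names, and clause choices below are hypothetical illustrations, not the paper's benchmarks or tuned values; the actual loop nests come from Nek5000 and NWChem.

```c
/* Hypothetical small trip counts: the regime where reshaping into
   BLAS matrix-matrix multiplies is not profitable. */
#define NI 4
#define NK 4
#define NJ 4
#define NC 3

/* Sequential tensor contraction over index k:
   C[i][j][c] = sum_k A[i][k] * B[k][j][c]
   This is the style of loop nest an autotuner would take as input
   and translate into CUDA variants. */
void contract_seq(const double A[NI][NK],
                  const double B[NK][NJ][NC],
                  double C[NI][NJ][NC]) {
    for (int i = 0; i < NI; i++)
        for (int j = 0; j < NJ; j++)
            for (int c = 0; c < NC; c++) {
                double s = 0.0;
                for (int k = 0; k < NK; k++)
                    s += A[i][k] * B[k][j][c];
                C[i][j][c] = s;
            }
}

/* OpenACC-annotated variant: the gang and vector clauses steer the
   thread-block decomposition, and the scalar accumulator s is a
   candidate for register placement. The clause choices here are
   illustrative only. Without an OpenACC compiler the pragmas are
   ignored and the code runs sequentially. */
void contract_acc(const double A[NI][NK],
                  const double B[NK][NJ][NC],
                  double C[NI][NJ][NC]) {
    #pragma acc parallel loop gang collapse(2) copyin(A, B) copyout(C)
    for (int i = 0; i < NI; i++)
        for (int j = 0; j < NJ; j++) {
            #pragma acc loop vector
            for (int c = 0; c < NC; c++) {
                double s = 0.0;
                for (int k = 0; k < NK; k++)
                    s += A[i][k] * B[k][j][c];
                C[i][j][c] = s;
            }
        }
}
```

In the sequential form, an autotuner is free to permute loops, map i and j to thread blocks, and keep operands in registers; the OpenACC clauses expose only a subset of the same decisions, which is consistent with the performance gap the abstract reports.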
