
A GPU-based Multi-level Subspace Decomposition Scheme for Hierarchical Tensor Product Bases

Ivan Chernov

Technische Universität München, 2013
@mastersthesis{chernov13gpu-based,
   author={Chernov, Ivan},
   title={A GPU-based Multi-level Subspace Decomposition Scheme for Hierarchical Tensor Product Bases},
   school={Technische Universit\"at M\"unchen},
   type={Master's thesis},
   month={dec},
   year={2013},
   url={http://www5.in.tum.de/pub/chernov13masterthesis.pdf}
}


The aim of this thesis is to implement a multi-level splitting of full grids on the GPU, which can be used for the incremental visualization of scientific data sets. The splitting is motivated by the approximation properties of the sparse grid technique. With large data volumes in mind, parallelization and data-slicing strategies are discussed and implemented. State-of-the-art implementations of the splitting algorithm are reviewed, and the highly parallelizable part is extracted. We compare against a highly tuned CPU version of the algorithm, aiming to reduce the time to solution as far as possible; since a higher degree of parallelism can lower the time to solution, massively parallel GPUs are promising target platforms. The general implementation ideas are taken from the CPU version and mapped to the GPU. Although the performance results of this first implementation are promising, the parallel, vectorized CPU version remains slightly faster; nevertheless, the GPU implementation comes close to the highly tuned CPU version. Pursuing this approach further may prove useful for the time-critical task of visualizing (reading, processing, drawing) large amounts of data, which must be as fast as possible and requires access to both coarse- and fine-level representations of the data.
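To make the abstract's "multi-level splitting" concrete: for the piecewise-linear hierarchical basis, splitting nodal full-grid values into level-wise subspace contributions is the classic hierarchization step. Below is a minimal serial 1D sketch under our own assumptions (the thesis itself targets GPUs and higher dimensions; the function name and layout here are illustrative, not taken from the thesis). The inner loop over the points of one level is the independent, highly parallelizable part the abstract mentions.

```python
def hierarchize_1d(u):
    """In-place 1D hierarchization: convert nodal values u[i] at the
    interior grid points x = (i+1) / 2^n, i = 0 .. 2^n - 2 (zero
    boundary values), into hierarchical surpluses of the
    piecewise-linear hierarchical basis. Returns the surplus list."""
    n_points = len(u)                      # must equal 2^n - 1
    n = (n_points + 1).bit_length() - 1    # grid level
    assert (1 << n) - 1 == n_points, "need 2^n - 1 interior points"
    v = [0.0] + list(u) + [0.0]            # pad with zero boundary values
    # Work from the finest level down to level 1. The parents of a
    # level-l point lie on coarser levels and are still nodal values
    # when used, so all points of one level can be updated in parallel.
    for level in range(n, 0, -1):
        h = 1 << (n - level)               # index distance to the parents
        for i in range(h, 1 << n, 2 * h):  # odd multiples of h
            v[i] -= 0.5 * (v[i - h] + v[i + h])
    return v[1:-1]
```

A convenient sanity check: for f(x) = x(1 - x), the surplus of every level-l point is exactly 4^{-l}, since f(x) - (f(x-h) + f(x+h))/2 = h^2 with h = 2^{-l}.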


HGPU group © 2010-2016 hgpu.org

All rights belong to the respective authors
