OmpSs task offload

Florentino Sainz
Facultat d'Informàtica de Barcelona (FIB), Universitat Politècnica de Catalunya
Universitat Politècnica de Catalunya, 2015

@mastersthesis{Sainz2015,
   title={OmpSs Task offload},
   author={Sainz Manteca, Florentino},
   school={Universitat Polit{\`e}cnica de Catalunya},
   year={2015}
}

Exascale performance requires a level of energy efficiency achievable only with specialized hardware. Hence, building a general-purpose HPC system with exascale performance will require different types of processors, memory technologies and interconnection networks. Heterogeneous hardware is already present in some top supercomputer systems, which are composed of different compute nodes that, in turn, contain different types of processors and memories. However, heterogeneous hardware is much harder to manage and exploit than homogeneous hardware, further increasing the complexity of the applications that run on HPC systems. Most HPC applications use MPI to implement a rigid Single Program Multiple Data (SPMD) execution model that no longer fits the heterogeneous nature of the underlying hardware. MPI does provide the powerful and flexible MPI_Comm_spawn API call, designed to exploit heterogeneous hardware dynamically, but at the expense of a higher complexity that has hindered wider adoption of this API. In this master thesis, we have extended the OmpSs programming model to dynamically offload MPI kernels, replacing the low-level and error-prone MPI_Comm_spawn call with high-level, easy-to-use OmpSs pragmas. The evaluation shows that our proposal dramatically simplifies the dynamic offloading of MPI kernels while keeping the same performance and scalability as MPI_Comm_spawn.
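To make the contrast concrete, the sketch below shows the kind of low-level MPI_Comm_spawn boilerplate the thesis targets. This is a minimal illustration written for this post, not code from the thesis: the worker binary name `kernel_worker` and the process count are hypothetical placeholders, and the data-exchange step is elided.

```c
/* Minimal sketch of the low-level MPI_Comm_spawn path that OmpSs task
 * offload is meant to hide. Build with mpicc; "kernel_worker" is a
 * hypothetical separate MPI program, not part of the thesis. */
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    MPI_Comm intercomm;   /* intercommunicator to the spawned kernel */
    int errcodes[4];      /* one error code per spawned process */

    /* Dynamically launch 4 worker processes running a separate MPI
     * program. After this call the parent must hand-manage the
     * intercommunicator, the input/output transfers and the workers'
     * termination -- the boilerplate that an OmpSs offload pragma
     * replaces with task dependences handled by the runtime. */
    MPI_Comm_spawn("kernel_worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0 /* root rank */, MPI_COMM_SELF,
                   &intercomm, errcodes);

    /* ... exchange inputs/outputs with the workers over intercomm ... */

    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}
```

In the OmpSs approach described in the abstract, the spawn, the data movement and the synchronization are instead expressed declaratively through pragmas, and the runtime manages the intercommunicator lifetime.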


HGPU group © 2010-2016 hgpu.org

All rights belong to the respective authors
