Exploiting Task-Parallelism on GPU Clusters via OmpSs and rCUDA Virtualization

Adrián Castelló, Rafael Mayo, Judit Planas, Enrique S. Quintana-Ortí
Depto. de Ingeniería y Ciencia de Computadores, Universidad Jaume I, 12071-Castellón, Spain
1st IEEE Int. Workshop on Reengineering for Parallelism in Heterogeneous Parallel Platforms (RePara), 2015

@inproceedings{castello2015exploiting,
   title={Exploiting Task-Parallelism on GPU Clusters via OmpSs and rCUDA Virtualization},
   author={Castell{\'o}, Adri{\'a}n and Mayo, Rafael and Planas, Judit and Quintana-Ort{\'i}, Enrique S.},
   booktitle={1st IEEE International Workshop on Reengineering for Parallelism in Heterogeneous Parallel Platforms (RePara)},
   year={2015}
}




OmpSs is a task-parallel programming model consisting of a reduced collection of OpenMP-like directives, a front-end compiler, and a runtime system. This directive-based programming interface helps developers accelerate the execution of their applications, e.g. on a cluster equipped with graphics processing units (GPUs), with low programming effort. On the other hand, the virtualization package rCUDA provides seamless and transparent remote access to any CUDA GPU in a cluster via the CUDA Driver and Runtime programming interfaces. In this paper we investigate the hurdles and practical advantages of combining these two technologies. Our experimental study targets two cluster configurations: a system where all the GPUs are located in a single cluster node, and a cluster with the GPUs distributed among the nodes. Two applications, an N-body particle simulation and the Cholesky factorization of a dense matrix, are employed to expose the bottlenecks and the performance of a remote virtualization solution applied to these two OmpSs task-parallel codes.
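
To make the directive-based approach concrete, the following is a minimal sketch (not taken from the paper) of how a CUDA kernel can be exposed as an OmpSs task. The kernel name saxpy, the 128-thread block size, and the helper scale_and_add are illustrative assumptions; the target device(cuda), ndrange, copy_deps and task clauses follow the OmpSs programming model, although exact spellings may vary between OmpSs releases.

/* Minimal sketch of an OmpSs CUDA task (illustrative, not from the paper). */

/* CUDA kernel, typically defined in a separate .cu file and declared here as
   an OmpSs task; the Nanos++ runtime copies the data sections listed in the
   dependence clauses and launches the kernel with the given ndrange. */
#pragma omp target device(cuda) ndrange(1, n, 128) copy_deps
#pragma omp task in(x[0;n]) inout(y[0;n])
__global__ void saxpy(int n, float a, const float *x, float *y);

void scale_and_add(int n, float a, const float *x, float *y)
{
    saxpy(n, a, x, y);      /* looks like a plain call; runs asynchronously on a GPU */
    #pragma omp taskwait    /* wait for the task (and its data transfers) to finish */
}

Because rCUDA exposes remote GPUs through the standard CUDA Driver and Runtime APIs, such an OmpSs binary can, in principle, run unchanged while the kernel is actually executed on a GPU located in a different node; which remote devices are visible to the application is configured on the rCUDA client side.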