15265

GPU Remote Memory Access Programming

Jeremia Bar
Scalable Parallel Computing Laboratory, Department of Computer Science, ETH Zurich
ETH Zurich, 2015
@phdthesis{bar2015gpu,

   title={GPU Remote Memory Access Programming},

   author={B{"a}r, Jeremia},

   year={2015},

   school={Master Thesis}

}

Download Download (PDF)   View View   Source Source   

369

views

High performance computing studies the construction and programming of computing system with tremendous computational power playing a key role in scientific computing and research across disciplines. The graphics processing unit (GPU) developed for fast 2D and 3D visualizations has turned into a programmable general purpose accelerator device boosting today’s high performance clusters. Leveraging these computational resources requires good programming model abstractions to manage system complexity. Today’s state of the art employs separate cluster communication and GPU computation models such as MPI and CUDA. The bulk-synchronous nature of CUDA and the role of the GPU as a CPU-dependent co-processor limits cluster utilization. In this master thesis we devise, implement, and evaluate the novel GPU cluster programming model GPU RMA addressing three key areas. GPU RMA introduces a simplifying abstraction exposing the cluster as a set of interacting ranks. The ranks execute on the GPU removing the complexity of explicit GPU management from the host. GPU RMA introduces communication amongst ranks on local and remote devices allowing more fine-grained communication and synchronization compared to conventional models increasing concurrency and resource usage. GPU RMA implements one-sided notified access communication leveraging the recently introduced RDMA support for GPUs providing richer communication semantics than current solutions. We implemented GPU RMA on a cluster with Intel CPUs, Nvidia GPUs, and an InfiniBand interconnect, studied the impact of system components to communication latency, and derived an empirical performance model. We performed a case study to assess GPU RMA performance compared to the established MPI and CUDA approach and validated our performance model. Our experiments show GPU RMA improves performance up to 25% compared to the state of the art and encourages further study of GPU RMA performance and scalability.
VN:F [1.9.22_1171]
Rating: 0.0/5 (0 votes cast)

* * *

* * *

TwitterAPIExchange Object
(
    [oauth_access_token:TwitterAPIExchange:private] => 301967669-yDz6MrfyJFFsH1DVvrw5Xb9phx2d0DSOFuLehBGh
    [oauth_access_token_secret:TwitterAPIExchange:private] => o29ji3VLVmB6jASMqY8G7QZDCrdFmoTvCDNNUlb7s
    [consumer_key:TwitterAPIExchange:private] => TdQb63pho0ak9VevwMWpEgXAE
    [consumer_secret:TwitterAPIExchange:private] => Uq4rWz7nUnH1y6ab6uQ9xMk0KLcDrmckneEMdlq6G5E0jlQCFx
    [postfields:TwitterAPIExchange:private] => 
    [getfield:TwitterAPIExchange:private] => ?cursor=-1&screen_name=hgpu&skip_status=true&include_user_entities=false
    [oauth:protected] => Array
        (
            [oauth_consumer_key] => TdQb63pho0ak9VevwMWpEgXAE
            [oauth_nonce] => 1475094677
            [oauth_signature_method] => HMAC-SHA1
            [oauth_token] => 301967669-yDz6MrfyJFFsH1DVvrw5Xb9phx2d0DSOFuLehBGh
            [oauth_timestamp] => 1475094677
            [oauth_version] => 1.0
            [cursor] => -1
            [screen_name] => hgpu
            [skip_status] => true
            [include_user_entities] => false
            [oauth_signature] => rMuJM81udjW+2n7IeFBuk8Vpk9U=
        )

    [url] => https://api.twitter.com/1.1/users/show.json
)
Follow us on Facebook
Follow us on Twitter

HGPU group

2001 peoples are following HGPU @twitter

HGPU group © 2010-2016 hgpu.org

All rights belong to the respective authors

Contact us: