Exploring Traditional and Emerging Parallel Programming Models using a Proxy Application

Ian Karlin, Abhinav Bhatele, Jeff Keasler, Bradford L. Chamberlain, Jonathan Cohen, Zachary DeVito, Riyaz Haque, Dan Laney, Edward Luke, Felix Wang, David Richards, Martin Schulz, Charles H. Still
Lawrence Livermore National Laboratory, P. O. Box 808, Livermore, California 94551 USA
IEEE International Parallel & Distributed Processing Symposium (IPDPS ’13), 2013
@inproceedings{karlin2013exploring,
  title={Exploring Traditional and Emerging Parallel Programming Models using a Proxy Application},
  author={Karlin, I. and Bhatele, A. and Keasler, J. and Chamberlain, B.L. and Cohen, J. and DeVito, Z. and Haque, R. and Laney, D. and Luke, E. and Wang, F. and others},
  booktitle={IEEE International Parallel \& Distributed Processing Symposium (IPDPS)},
  year={2013}
}

Parallel computing architectures are becoming more complex, with increasing core counts and greater heterogeneity. However, the most commonly used programming models, C/C++ with MPI and/or OpenMP, make it very difficult to write source code that is easily tuned for many targets. Newer language approaches attempt to ease this burden by providing optimization features such as computation and communication overlap, message-driven execution, automatic load balancing, and implicit data layout optimizations. In this paper, we compare multiple implementations of LULESH, a proxy application for shock hydrodynamics, to determine strengths and weaknesses of four traditional (OpenMP, MPI, MPI+OpenMP, CUDA) and four emerging (Chapel, Charm++, Liszt, Loci) programming models for parallel computation. In evaluating these programming models, we focus on programmer productivity, performance, and ease of applying optimizations.

HGPU group © 2010-2016 hgpu.org

All rights belong to the respective authors
