Astrophysical-oriented Computational multi-Architectural Framework

Dzmitry Razmyslovich
Ruperto-Carola University of Heidelberg, Germany
Ruperto-Carola University of Heidelberg, 2017

@phdthesis{razmyslovich2017astrophysicaloriented,
   title={Astrophysical-oriented Computational multi-Architectural Framework},
   author={Razmyslovich, Dzmitry},
   school={Ruperto-Carola University of Heidelberg},
   year={2017}
}

This work presents a framework for simplifying software development in the field of astrophysical simulation: the Astrophysical-oriented Computational multi-Architectural Framework (ACAF). Astrophysical simulation problems are typically approximated by particle systems for computational purposes. The number of particles in such approximations reaches several million, which necessitates the use of computer clusters. At the same time, the computational intensity of these approximations makes it reasonable to employ heterogeneous clusters, using Graphics Processing Units (GPUs) and Field-Programmable Gate Arrays (FPGAs) as accelerators. However, developing programs for heterogeneous clusters is a complicated task that requires expertise in network programming and parallel programming. The ACAF aims to simplify heterogeneous cluster programming by providing the user with a set of objects and functions covering several aspects of application development. The ACAF targets data-parallel problems and focuses on problems approximated with particle systems. It is designed as a C++ framework and is based on a hierarchy of components, each responsible for a different aspect of heterogeneous cluster programming. Extending the hierarchy with new components makes it possible to apply the framework to other problems, other hardware, other distribution schemes and other computational methods. Being a C++ framework, the ACAF also keeps open the possibility of reusing existing libraries and codes. A usage example demonstrates the concept of separating the different programming aspects into different parts of the source code. The benchmarking results show that the execution time overhead of a program written with the framework is just 1.6% for small particle systems and approaches 0% for larger particle systems (in comparison to the bare simulation code).
At the same time, execution with different cluster configurations shows that program performance scales almost linearly with the number of cluster nodes in use. These results demonstrate the efficiency and usability of the framework implementation.
