Block Time Step Storage Scheme for Astrophysical N-body Simulations

Maxwell Xu Cai, Yohai Meiron, M.B.N. Kouwenhoven, Paulina Assmann, Rainer Spurzem
National Astronomical Observatories, Chinese Academy of Sciences, 20A Datun Road, Chaoyang District, Beijing 100012, China
arXiv:1506.07591 [astro-ph.IM], (25 Jun 2015)

@misc{cai2015block,
   title={Block Time Step Storage Scheme for Astrophysical N-body Simulations},
   author={Cai, Maxwell Xu and Meiron, Yohai and Kouwenhoven, M.B.N. and Assmann, Paulina and Spurzem, Rainer},
   year={2015},
   eprint={1506.07591},
   archivePrefix={arXiv},
   primaryClass={astro-ph.IM}
}

Astrophysical research in recent decades has made significant progress thanks to the availability of various N-body simulation techniques. With the rapid development of high-performance computing technologies, modern simulations are able to harness the computing power of massively parallel clusters with more than 10^5 GPU cores. While unprecedented accuracy and dynamical scales have been achieved, the enormous amount of data generated continuously poses great challenges for the subsequent stages of data analysis and archiving. As a response to these challenges, in this paper we propose an adaptive storage scheme for simulation data, inspired by the block time step integration scheme found in a number of direct N-body integrators available today. The proposed scheme, the block time step storage scheme, minimizes data redundancy by assigning each data element an individual output frequency, as required by the researcher. As demonstrated by benchmarks, the proposed scheme is applicable to a wide variety of simulations. Although our main focus is a solution for direct N-body simulation data, the methodology is transferable to grid-based or tree-based simulations in which hierarchical time stepping is used.
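The abstract's core idea can be illustrated with a minimal sketch (not taken from the paper): each particle is assigned an output interval, typically a power-of-two multiple of a base time step as in block time step integration, and at a given output time only the particles whose interval divides that time are written out. The function name and interval values below are illustrative assumptions.

```python
def block_output_schedule(intervals, t_end, dt_base=1):
    """Return {time: [indices of particles output at that time]}.

    intervals: per-particle output intervals (multiples of dt_base).
    Particles with small intervals are written often; slowly evolving
    particles with large intervals are written rarely, reducing
    redundancy in the stored data.
    """
    schedule = {}
    t = dt_base
    while t <= t_end:
        # A particle is due for output when its interval divides t.
        due = [i for i, dt in enumerate(intervals) if t % dt == 0]
        if due:
            schedule[t] = due
        t += dt_base
    return schedule

# Three particles with output intervals of 1, 2 and 4 base steps:
sched = block_output_schedule([1, 2, 4], t_end=4)
# At t=1 only particle 0 is written; at t=4 all three are.
```

A full implementation would additionally need to interpolate particle states between outputs when reading the data back, since snapshots are no longer synchronized across all particles.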

HGPU group © 2010-2016 hgpu.org

All rights belong to the respective authors
