
Block Time Step Storage Scheme for Astrophysical N-body Simulations

Maxwell Xu Cai, Yohai Meiron, M.B.N. Kouwenhoven, Paulina Assmann, Rainer Spurzem
National Astronomical Observatories, Chinese Academy of Sciences, 20A Datun Road, Chaoyang District, Beijing 100012, China
arXiv:1506.07591 [astro-ph.IM] (25 Jun 2015)

@article{cai2015block,
   title={Block Time Step Storage Scheme for Astrophysical N-body Simulations},
   author={Cai, Maxwell Xu and Meiron, Yohai and Kouwenhoven, M.B.N. and Assmann, Paulina and Spurzem, Rainer},
   year={2015},
   month={jun},
   eprint={1506.07591},
   archivePrefix={arXiv},
   primaryClass={astro-ph.IM}
}


Astrophysical research in recent decades has made significant progress thanks to the availability of various N-body simulation techniques. With the rapid development of high-performance computing technologies, modern simulations can harness the computing power of massively parallel clusters with more than 10^5 GPU cores. While unprecedented accuracy and dynamical scales have been achieved, the enormous amount of data generated continuously poses great challenges for the subsequent stages of data analysis and archiving. As a response to these challenges, in this paper we propose an adaptive storage scheme for simulation data, inspired by the block time step integration scheme found in a number of direct N-body integrators available today. The proposed scheme, the block time step storage scheme, minimizes data redundancy by assigning the data individual output frequencies, as required by the researcher. As demonstrated by benchmarks, the proposed scheme is applicable to a wide variety of simulations. Although the main focus is a solution for direct N-body simulation data, the methodology is transferable to grid-based or tree-based simulations in which hierarchical time stepping is used.
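To illustrate the idea behind such a scheme, here is a minimal sketch of block time step output scheduling. It is not the authors' implementation; the function name and the power-of-two level assignment are assumptions for illustration. Each particle is tagged with a block level k and its state is written only every 2^k base steps, so slowly evolving particles are stored less often and redundancy shrinks.

import numpy as np

def block_output_mask(levels, step):
    """Boolean mask of particles due for output at integer base step `step`.

    Each particle carries a block level k: its state is written every
    2**k base steps, mirroring the power-of-two hierarchy used by block
    time step integrators.
    """
    intervals = 2 ** levels           # per-particle output interval
    return step % intervals == 0      # due when the step is a multiple

# Eight particles spread over levels 0..3: low-level particles evolve
# fast and are written often; high-level ones are written rarely.
levels = np.array([0, 0, 1, 1, 2, 2, 3, 3])
for step in range(9):
    due = np.nonzero(block_output_mask(levels, step))[0]
    print(f"step {step}: write particles {due.tolist()}")

In this toy run, step 0 writes every particle, step 1 writes only the level-0 particles, step 4 writes levels 0 through 2, and step 8 writes all particles again, matching the commensurable structure of block time steps.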
