A Parallel Monte Carlo Code for Simulating Collisional N-body Systems

Bharath Pattabiraman, Stefan Umbreit, Wei-Keng Liao, Alok Choudhary, Vassiliki Kalogera, Gokhan Memik, Frederic A. Rasio
Center for Interdisciplinary Exploration and Research in Astrophysics, Northwestern University, Evanston, USA
arXiv:1206.5878v1 [astro-ph.IM] (26 Jun 2012)

@article{2012arXiv1206.5878P,

   author = {{Pattabiraman}, B. and {Umbreit}, S. and {Liao}, W.-K. and {Choudhary}, A. and {Kalogera}, V. and {Memik}, G. and {Rasio}, F.~A.},

   title = "{A Parallel Monte Carlo Code for Simulating Collisional N-body Systems}",

   journal={ArXiv e-prints},

   archivePrefix = "arXiv",

   eprint={1206.5878},

   primaryClass = "astro-ph.IM",

   keywords={Astrophysics – Instrumentation and Methods for Astrophysics, Astrophysics – Galaxy Astrophysics, Physics – Computational Physics},

   year={2012},

   month={jun},

   adsurl={http://adsabs.harvard.edu/abs/2012arXiv1206.5878P},

   adsnote={Provided by the SAO/NASA Astrophysics Data System}

}


We present a new parallel code for computing the dynamical evolution of collisional N-body systems with up to N~10^7 particles. Our code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation, and makes assumptions of spherical symmetry and dynamical equilibrium. The principal algorithmic developments involve optimizing data structures and introducing a parallel random number generation scheme, as well as a parallel sorting algorithm, required to find nearest neighbors for interactions and to compute the gravitational potential. The new algorithms we introduce, along with our choice of decomposition scheme, minimize communication costs and ensure an optimal distribution of data and workload among the processing units. The implementation uses the Message Passing Interface (MPI) library for communication, which makes it portable to many different supercomputing architectures. We validate the code by calculating the evolution of clusters with initial Plummer distribution functions up to core collapse with the number of stars, N, spanning three orders of magnitude, from 10^5 to 10^7. We find that our results are in good agreement with self-similar core-collapse solutions, and the core-collapse times generally agree with expectations from the literature. We also observe good total energy conservation, to within less than 1% throughout all simulations. We analyze the performance of the code and demonstrate near-linear scaling of the runtime with the number of processors up to 64 processors for N=10^5, 128 for N=10^6, and 256 for N=10^7. Beyond these limits the runtime saturates with the addition of more processors, a characteristic of the parallel sorting algorithm. The resulting maximum speedups we achieve are approximately 60x, 100x, and 220x, respectively.
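Two of the ingredients the abstract names — independent per-process random streams and a parallel sort used to find nearest neighbors — can be illustrated with a small single-process sketch. The function names, the use of NumPy's SeedSequence, and the sample-sort variant below are illustrative assumptions, not the authors' actual MPI implementation.

```python
# Illustrative single-process sketch (not the paper's MPI code):
# (1) independent, reproducible random streams, one per "rank", and
# (2) a sample-sort style parallel sort, the class of algorithm whose
#     communication pattern produces the runtime saturation noted above.
import numpy as np

def make_rank_streams(master_seed, n_ranks):
    # SeedSequence.spawn yields statistically independent child seeds,
    # one per processing unit, so ranks never share a random sequence.
    root = np.random.SeedSequence(master_seed)
    return [np.random.default_rng(s) for s in root.spawn(n_ranks)]

def sample_sort(local_arrays, oversample=4):
    # Each "rank" holds one array in local_arrays; in a real MPI code
    # the routing step below would be an all-to-all exchange.
    p = len(local_arrays)
    # 1. Every rank contributes a few evenly spaced sorted samples.
    samples = np.sort(np.concatenate(
        [np.sort(a)[::max(1, len(a) // oversample)] for a in local_arrays]))
    # 2. Pick p-1 global splitters from the gathered samples.
    step = max(1, len(samples) // p)
    splitters = samples[step::step][:p - 1]
    # 3. Route each element to the rank that owns its key range.
    buckets = [[] for _ in range(p)]
    for a in local_arrays:
        for x, dest in zip(a, np.searchsorted(splitters, a, side="right")):
            buckets[dest].append(x)
    # 4. Each rank sorts its bucket locally; the concatenation of the
    #    buckets, taken in rank order, is globally sorted.
    return [np.sort(np.asarray(b)) for b in buckets]

# Example: 4 "ranks", each drawing local data from its own stream.
streams = make_rank_streams(42, 4)
local_data = [rng.random(50) for rng in streams]
globally_sorted = np.concatenate(sample_sort(local_data))
```

Because the splitters are drawn from samples of every rank's data, each bucket receives a roughly equal share of the elements, which is what keeps the per-rank workload balanced in this family of algorithms.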

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
