GraviDy: a GPU modular, parallel N-body integrator

Cristian Maureira-Fredes, Pau Amaro-Seoane
Max Planck Institute for Gravitational Physics (Albert-Einstein-Institut), D-14476 Potsdam, Germany
arXiv:1702.00440 [astro-ph.IM], (1 Feb 2017)

@article{maureira-fredes2017gravidy,
   title={GraviDy: a GPU modular, parallel $N$-body integrator},
   author={Maureira-Fredes, Cristian and Amaro-Seoane, Pau},
   year={2017},
   month={feb},
   eprint={1702.00440},
   archivePrefix={arXiv},
   primaryClass={astro-ph.IM}
}

A wide variety of outstanding problems in astrophysics involve the motion of a large number of particles ($N \gtrsim 10^{6}$) under the force of gravity. These include the global evolution of globular clusters, tidal disruptions of stars by a massive black hole, the formation of protoplanets and the detection of sources of gravitational radiation. The direct summation of $N$ gravitational forces is a complex problem with no analytical solution and can only be tackled with approximations and numerical methods. To this end, the Hermite scheme is a widely used integration method. With different numerical techniques and special-purpose hardware, it can be used to speed up the calculations, but these methods tend to be computationally expensive and cumbersome to work with. Here we present a new GPU, direct-summation $N$-body integrator written from scratch and based on this scheme. The code is highly modular, allowing users to readily introduce new physics; it exploits available high-performance computing resources and will be maintained through regular public updates. It can be run in parallel on multiple CPUs and GPUs, with a considerable speed-up benefit. The single-GPU version runs about 200 times faster than the single-CPU version, and a test run using 4 GPUs in parallel shows a further speed-up factor of about 3 compared to the single-GPU version. The conception and design of this first release are aimed at users with access to traditional parallel CPU clusters or computational nodes with one or a few GPU cards.
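The computational core of a direct-summation Hermite integrator of this kind is an O(N^2) pass that, for every particle, accumulates the Newtonian acceleration and its time derivative (the jerk) over all other particles; these two quantities feed the Hermite predictor-corrector step. The sketch below is a minimal CUDA illustration of that pass, not code taken from GraviDy itself; the kernel name compute_acc_jerk, the array layout (double4 positions carrying the mass in .w) and the Plummer softening eps2 are assumptions made for the example, and N-body units (G = 1) are assumed.

// A minimal CUDA sketch (assumed names and layout, not GraviDy's actual source)
// of the O(N^2) direct-summation step behind a Hermite integrator: for every
// particle i it accumulates the Newtonian acceleration and its time derivative
// (the "jerk") over all j != i, in N-body units (G = 1).
__global__ void compute_acc_jerk(const double4 *pos,    // x, y, z, mass
                                 const double4 *vel,    // vx, vy, vz, (unused)
                                 double4 *acc,          // out: ax, ay, az, 0
                                 double4 *jrk,          // out: jx, jy, jz, 0
                                 int n, double eps2)    // eps2: Plummer softening squared
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    double3 a = make_double3(0.0, 0.0, 0.0);
    double3 j = make_double3(0.0, 0.0, 0.0);
    double4 pi = pos[i], vi = vel[i];

    for (int k = 0; k < n; ++k) {
        if (k == i) continue;
        double4 pk = pos[k], vk = vel[k];

        double rx = pk.x - pi.x, ry = pk.y - pi.y, rz = pk.z - pi.z;
        double wx = vk.x - vi.x, wy = vk.y - vi.y, wz = vk.z - vi.z;

        double r2   = rx*rx + ry*ry + rz*rz + eps2;   // softened |r_ik|^2
        double rinv = rsqrt(r2);
        double mr3  = pk.w * rinv * rinv * rinv;      // m_k / r^3
        double rv   = rx*wx + ry*wy + rz*wz;          // r . v
        double s    = 3.0 * rv / r2;

        // acceleration: m_k r / r^3
        a.x += mr3 * rx;  a.y += mr3 * ry;  a.z += mr3 * rz;

        // jerk: m_k [ v / r^3 - 3 (r . v) r / r^5 ]
        j.x += mr3 * (wx - s * rx);
        j.y += mr3 * (wy - s * ry);
        j.z += mr3 * (wz - s * rz);
    }

    acc[i] = make_double4(a.x, a.y, a.z, 0.0);
    jrk[i] = make_double4(j.x, j.y, j.z, 0.0);
}

Launched with one thread per particle, for instance compute_acc_jerk<<<(n + 255) / 256, 256>>>(pos, vel, acc, jrk, n, eps2), this already exposes the data parallelism the abstract refers to: the quoted GPU speed-ups come from evaluating exactly this kind of pairwise loop on many cores at once, with further gains available from shared-memory tiling and from distributing the particle set across several GPUs.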
