
A fully parallel, high precision, N-body code running on hybrid computing platforms

R. Capuzzo-Dolcetta, M. Spera, D. Punzo
Dep. of Physics, Sapienza, University of Roma, P.le A. Moro 1, Roma, Italy
arXiv:1207.2367v1 [astro-ph.IM] (10 Jul 2012)

@article{2012arXiv1207.2367C,
   author = {{Capuzzo-Dolcetta}, R. and {Spera}, M. and {Punzo}, D.},
   title = {{A fully parallel, high precision, N-body code running on hybrid computing platforms}},
   journal = {ArXiv e-prints},
   archivePrefix = {arXiv},
   eprint = {1207.2367},
   primaryClass = {astro-ph.IM},
   keywords = {Astrophysics - Instrumentation and Methods for Astrophysics, Computer Science - Distributed, Parallel, and Cluster Computing, Physics - Computational Physics},
   year = {2012},
   month = {jul},
   adsurl = {http://adsabs.harvard.edu/abs/2012arXiv1207.2367C},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

We present a new implementation of the numerical integration of the classical, gravitational N-body problem based on a high order Hermite integration scheme with block time steps and a direct evaluation of the particle-particle forces. The main innovation of this code (called HiGPUs) is its full parallelization, exploiting both OpenMP and MPI on the multicore Central Processing Units as well as either Compute Unified Device Architecture (CUDA) or OpenCL on the hosted Graphics Processing Units. We tested both the performance and the accuracy of the code using up to 256 GPUs of the IBM iDataPlex DX360M3 Linux Infiniband Cluster supercomputer provided by the Italian supercomputing consortium CINECA, for values of N up to 8 million. We were able to follow the evolution of a system of 8 million bodies for a few crossing times, a task previously unattained by direct summation codes. The code is freely available to the scientific community.
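
The "direct evaluation of the particle-particle forces" means that each particle's acceleration is computed as the explicit pairwise sum over all other bodies, an O(N^2) operation that maps naturally onto GPU threads. As a rough illustration only (this is not the HiGPUs source, which uses a high order Hermite scheme with block time steps and distributes the work over many GPUs with MPI, OpenMP and CUDA/OpenCL), a minimal single-GPU CUDA sketch of the plain acceleration sum could look like the following; the kernel name, the float4 position/mass layout and the softening value are assumptions of this sketch, not taken from the paper.

// Hypothetical sketch of an O(N^2) direct-summation acceleration kernel.
// Not the HiGPUs implementation; names and parameters are illustrative.
#include <cuda_runtime.h>

__global__ void direct_forces(int n, float eps2, const float4 *pos, float3 *acc)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 pi = pos[i];                        // x, y, z: position; w: mass
    float3 ai = make_float3(0.0f, 0.0f, 0.0f);

    for (int j = 0; j < n; ++j) {              // full pairwise sum, O(N) per thread
        float4 pj = pos[j];
        float dx = pj.x - pi.x;
        float dy = pj.y - pi.y;
        float dz = pj.z - pi.z;
        float r2 = dx*dx + dy*dy + dz*dz + eps2;   // Plummer softening avoids r = 0
        float rinv  = rsqrtf(r2);
        float rinv3 = rinv * rinv * rinv;
        ai.x += pj.w * rinv3 * dx;             // a_i += m_j (r_j - r_i) / |r_ij|^3
        ai.y += pj.w * rinv3 * dy;
        ai.z += pj.w * rinv3 * dz;
    }
    acc[i] = ai;
}

int main()
{
    const int   n    = 4096;
    const float eps2 = 1.0e-4f;                // illustrative softening, not from the paper

    float4 *d_pos; float3 *d_acc;
    cudaMalloc(&d_pos, n * sizeof(float4));
    cudaMalloc(&d_acc, n * sizeof(float3));
    // ... fill d_pos with initial positions and masses (omitted) ...

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    direct_forces<<<blocks, threads>>>(n, eps2, d_pos, d_acc);
    cudaDeviceSynchronize();

    cudaFree(d_pos);
    cudaFree(d_acc);
    return 0;
}

A real Hermite integrator also accumulates the jerk (the time derivative of the acceleration) in the same loop and, with block time steps, updates only the currently active group of particles per step, but the structure of the pairwise sum is the same.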