High Performance Direct Gravitational N-body Simulations on Graphics Processing Units

Simon Portegies Zwart, Robert Belleman, Peter Geldof
Section Computational Science, University of Amsterdam, Amsterdam, The Netherlands
New Astronomy, Volume 12, Issue 8, November 2007, Pages 641-650, arXiv:cs/0702135v1 [cs.PF] (23 Feb 2007)


@article{PortegiesZwart2007gpu,
   title={High-performance direct gravitational N-body simulations on graphics processing units},
   author={Portegies Zwart, S.F. and Belleman, R.G. and Geldof, P.M.},
   journal={New Astronomy},
   volume={12},
   number={8},
   pages={641--650},
   year={2007}
}




We present the results of gravitational direct $N$-body simulations using the commercial graphics processing units (GPUs) NVIDIA Quadro FX1400 and GeForce 8800GTX, and compare the results with GRAPE-6Af special-purpose hardware. The force evaluation of the $N$-body problem was implemented in Cg, using the GPU directly to speed up the calculations. The integration of the equations of motion, running on the host computer, was implemented in C using a 4th-order predictor-corrector Hermite integrator with block time steps. We find that for a large number of particles ($N \gtrsim 10^4$) modern graphics processing units offer an attractive low-cost alternative to GRAPE special-purpose hardware. A modern GPU continues to give a relatively flat scaling with the number of particles, comparable to that of the GRAPE. Using the same time step criterion, the total energy of the $N$-body system was conserved to better than one part in $10^6$ on the GPU, which is only about an order of magnitude worse than obtained with GRAPE. For $N \gtrsim 10^6$ the GeForce 8800GTX was about 20 times faster than the host computer. Though still about an order of magnitude slower than GRAPE, modern GPUs outperform GRAPE in their low cost, long mean time between failures and much larger onboard memory; the GRAPE-6Af holds at most 256k particles, whereas the GeForce 8800GTX can hold 9 million particles in memory.
