A Performance Comparison of Different Graphics Processing Units Running Direct N-Body Simulations

R. Capuzzo-Dolcetta, M. Spera
Department of Physics, Sapienza University of Roma, P.le A. Moro 5, Roma, Italy
arXiv:1304.1966 [astro-ph.IM], (7 Apr 2013)


@article{2013arXiv1304.1966C,
   author={{Capuzzo-Dolcetta}, R. and {Spera}, M.},
   title={"{A Performance Comparison of Different Graphics Processing Units Running Direct N-Body Simulations}"},
   journal={ArXiv e-prints},
   eprint={1304.1966},
   year={2013},
   keywords={Astrophysics - Instrumentation and Methods for Astrophysics, Computer Science - Distributed, Parallel, and Cluster Computing, Computer Science - Performance},
   adsnote={Provided by the SAO/NASA Astrophysics Data System}
}






Hybrid computational architectures based on the joint power of Central Processing Units (CPUs) and Graphics Processing Units (GPUs) are becoming popular and powerful hardware tools for a wide range of simulations in biology, chemistry, engineering, physics, etc. In this paper we present a performance comparison of various GPUs available on the market when applied to the numerical integration of the classic, gravitational N-body problem. For these tests we developed an OpenCL version of the parallel code (HiGPUs), because this version is the only one able to run on GPUs of different makes. The main general result is that we confirm the reliability, speed and low cost of GPUs when applied to the examined kind of problem, i.e. when the forces to be evaluated depend on the mutual distances of the bodies, as happens in gravitational physics and molecular dynamics. More specifically, we find that even the cheap GPUs built purely for gaming are very fast in scientific applications and, although with some limitations in on-board memory and bandwidth, can be a good choice for building a machine for scientific use at a very good performance-to-cost ratio.


HGPU group © 2010-2022 hgpu.org

All rights belong to the respective authors
