Parallel Algorithm for Solving Kepler’s Equation on Graphics Processing Units: Application to Analysis of Doppler Exoplanet Searches

Eric B. Ford
Department of Astronomy, University of Florida, 211 Bryant Space Science Center, PO Box 112055, Gainesville, FL, 32611-2055, USA
New Astronomy, Volume 14, Issue 4, p. 406-412 (2009), arXiv:0812.2976 [astro-ph] (16 Dec 2008)


@article{Ford2009,
   title={Parallel algorithm for solving Kepler's equation on Graphics Processing Units: Application to analysis of Doppler exoplanet searches},
   author={Ford, E. B.},
   journal={New Astronomy},
   volume={14},
   number={4},
   pages={406--412},
   year={2009}
}

We present the results of a highly parallel Kepler's equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce GTX 280 and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluating a goodness-of-fit statistic (e.g., chi^2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed-precision arithmetic. We find that the vast majority of computations can be performed in single precision, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code running on a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating N_sys > 1024 model planetary systems, each containing N_pl = 4 planets, with N_obs = 256 observations of each system. We conclude that modern GPUs offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models over a large parameter space.
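The numerical kernels the abstract refers to can be sketched in scalar Python (the paper's actual implementation is in CUDA; the function names, Newton-Raphson starting guess, and tolerance below are illustrative choices of ours, not taken from the paper):

```python
import math

def solve_kepler(M, e, tol=1e-10, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric
    anomaly E by Newton-Raphson iteration; a scalar stand-in for the
    per-model GPU kernel."""
    E = M + e * math.sin(M)  # common starting guess
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def mean_anomaly(t, t_peri, period):
    """Mean anomaly at time t, reduced to [0, 2*pi). Reducing
    (t - t_peri)/period modulo 1 before multiplying by 2*pi is the
    precision-sensitive step the abstract flags: many orbital periods
    can elapse over a Doppler campaign, so single precision loses the
    fractional phase."""
    frac = math.fmod((t - t_peri) / period, 1.0)
    if frac < 0.0:
        frac += 1.0
    return 2.0 * math.pi * frac

def kahan_chi2(scaled_residuals):
    """chi^2 as a sum of (residual/sigma)^2 terms accumulated with
    Kahan compensated summation, the kind of selective compensation
    the abstract mentions for increased precision."""
    total, c = 0.0, 0.0
    for r in scaled_residuals:
        y = r * r - c
        t = total + y
        c = (t - total) - y
        total = t
    return total
```

On the GPU, each thread would evaluate one (system, observation) pair in single precision, with the mean-anomaly reduction and the chi^2 accumulation handled in higher or compensated precision.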
