Accuracy and performance of graphics processors: A Quantum Monte Carlo application case study

Jeremy S. Meredith, Gonzalo Alvarez, Thomas A. Maier, Thomas C. Schulthess, Jeffrey S. Vetter
Oak Ridge National Laboratory, 1 Bethel Valley Road, MS 6173 Oak Ridge, TN 37831, USA
Parallel Computing, Vol. 35, No. 3. (March 2009), pp. 151-163


@article{meredith2009accuracy,
   title={Accuracy and performance of graphics processors: a quantum Monte Carlo application case study},
   author={Meredith, J.S. and Alvarez, G. and Maier, T.A. and Schulthess, T.C. and Vetter, J.S.},
   journal={Parallel Computing},
   volume={35},
   number={3},
   pages={151--163},
   year={2009}
}



The tradeoffs of accuracy and performance are as yet an unsolved problem when dealing with Graphics Processing Units (GPUs) as a general-purpose computation device. Their high performance and low cost make them a desirable target for scientific computation, and new language efforts help address the programming challenges of data parallel algorithms and memory management. But the original task of GPUs – real-time rendering – has traditionally kept accuracy as a secondary goal, and sacrifices have sometimes been made as a result. In fact, the widely deployed hardware is generally capable of only single precision arithmetic, and even this accuracy is not necessarily equivalent to that of a commodity CPU. In this paper, we investigate the accuracy and performance characteristics of GPUs, including results from a preproduction double precision-capable GPU. We then accelerate the full Quantum Monte Carlo simulation code DCA++, similarly investigating its tolerance to the precision of arithmetic delivered by GPUs. The results show that while DCA++ has some sensitivity to the arithmetic precision, the single-precision GPU results were comparable to single-precision CPU results. Acceleration of the code on a fully GPU-enabled cluster showed that any remaining inaccuracy in GPU precision was negligible; sufficient accuracy was retained for scientifically meaningful results while still showing significant speedups.

* * *


HGPU group © 2010-2021 hgpu.org

All rights belong to the respective authors
