Massively parallel Monte Carlo for many-particle simulations on GPUs

Joshua A. Anderson, Eric Jankowski, Thomas L. Grubb, Michael Engel, Sharon C. Glotzer
Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
arXiv:1211.1646 [physics.comp-ph] (7 Nov 2012)

@article{2012arXiv1211.1646A,
   author = {{Anderson}, J.~A. and {Jankowski}, E. and {Grubb}, T.~L. and {Engel}, M. and {Glotzer}, S.~C.},
   title = "{Massively parallel Monte Carlo for many-particle simulations on GPUs}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1211.1646},
   primaryClass = "physics.comp-ph",
   keywords = {Physics - Computational Physics, Condensed Matter - Materials Science, Condensed Matter - Soft Condensed Matter, Condensed Matter - Statistical Mechanics},
   year = {2012},
   month = {nov},
   adsurl = {http://adsabs.harvard.edu/abs/2012arXiv1211.1646A},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


Current trends in parallel processors call for the design of efficient massively parallel algorithms for scientific computing. Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. In this paper, we present a massively parallel method that obeys detailed balance and implement it for a system of hard disks on the GPU. We reproduce results of serial high-precision Monte Carlo runs to verify the method. This is a good test case because the hard-disk equation of state over the range where the liquid transforms into the solid is particularly sensitive to small deviations from the balance conditions. On a GeForce GTX 680, our GPU implementation executes 95 times faster than on a single Intel Xeon E5540 CPU core, enabling 17 times better performance per dollar and cutting energy usage by a factor of 10.
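The elementary operation behind such a simulation is the Metropolis trial move; for hard disks the Boltzmann factor is either zero (overlap) or one (no overlap), so acceptance reduces to an overlap test. The C sketch below shows only this serial per-move accept/reject rule, not the paper's parallel GPU scheme; the particle count, box length, disk diameter, and step size are illustrative assumptions, not values from the paper.

/*
 * Minimal serial sketch of a hard-disk Metropolis trial move.
 * This is NOT the authors' GPU algorithm; it only illustrates the
 * accept/reject rule (reject any move that creates an overlap) that a
 * parallel scheme must preserve to keep detailed balance.
 * All parameters below are illustrative assumptions.
 */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N      64      /* number of disks (assumption) */
#define L      16.0    /* periodic box edge length (assumption) */
#define SIGMA  1.0     /* disk diameter (assumption) */
#define DELTA  0.2     /* maximum trial displacement (assumption) */

static double x[N], y[N];

/* minimum-image separation squared under periodic boundaries */
static double dist2(double dx, double dy)
{
    dx -= L * round(dx / L);
    dy -= L * round(dy / L);
    return dx * dx + dy * dy;
}

/* return 1 if disk i placed at (xi, yi) would overlap any other disk */
static int overlaps(int i, double xi, double yi)
{
    for (int j = 0; j < N; ++j) {
        if (j == i)
            continue;
        if (dist2(xi - x[j], yi - y[j]) < SIGMA * SIGMA)
            return 1;
    }
    return 0;
}

/* one Metropolis trial move: displace a random disk and accept
 * the move only if it creates no overlap */
static void trial_move(void)
{
    int i = rand() % N;
    double xi = x[i] + DELTA * (2.0 * rand() / RAND_MAX - 1.0);
    double yi = y[i] + DELTA * (2.0 * rand() / RAND_MAX - 1.0);
    xi -= L * floor(xi / L);           /* wrap back into the box */
    yi -= L * floor(yi / L);
    if (!overlaps(i, xi, yi)) {
        x[i] = xi;
        y[i] = yi;
    }
}

int main(void)
{
    srand(1);  /* fixed seed so the sketch is reproducible */
    /* start from a dilute square lattice with no initial overlaps */
    for (int i = 0; i < N; ++i) {
        x[i] = (i % 8) * 2.0;
        y[i] = (i / 8) * 2.0;
    }
    for (int sweep = 0; sweep < 1000; ++sweep)
        for (int k = 0; k < N; ++k)
            trial_move();
    printf("disk 0 ended at (%.3f, %.3f)\n", x[0], y[0]);
    return 0;
}

Compile with, e.g., gcc -O2 -o hard_disks hard_disks.c -lm. The paper's contribution is scheduling many such trial moves concurrently on the GPU while still satisfying detailed balance.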
