High-Performance Physics Simulations Using Multi-Core CPUs and GPGPUs in a Volunteer Computing Context

Kamran Karimi, Neil G. Dickson, Firas Hamze
D-Wave Systems Inc., 100-4401 Still Creek Drive, Burnaby, British Columbia, Canada, V5C 6G9
International Journal of High Performance Computing Applications, July 1, 2010, arXiv:1004.0023 [cs.DC] (31 Mar 2010)


@article{karimi2010physics,
   title={High-Performance Physics Simulations Using Multi-Core CPUs and GPGPUs in a Volunteer Computing Context},
   author={Karimi, K. and Dickson, N. and Hamze, F.},
   journal={International Journal of High Performance Computing Applications},
   year={2010},
   publisher={SAGE Publications}
}





This paper presents two conceptually simple methods for parallelizing a Parallel Tempering Monte Carlo simulation in a distributed volunteer computing context, where computers belonging to the general public are used. The first method uses conventional multi-threading; the second uses CUDA, NVIDIA's general-purpose GPU computing platform. Parallel Tempering is described, and challenges such as parallel random number generation and the mapping of Monte Carlo chains to different threads are explained. While conventional multi-threading on CPUs is well established, GPGPU programming techniques and technologies are still developing and present several challenges, such as the effective use of a relatively large number of threads. Having multiple chains in Parallel Tempering allows parallelization in a manner similar to the serial algorithm. Volunteer computing imposes important constraints on high-performance computing, and we show that both versions of the application adapt to the varying and unpredictable computing resources of volunteers' computers, while leaving the machines responsive enough to use. We present experiments showing the scalable performance of the two approaches, and indicating that their efficiency increases with larger problem sizes.
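To illustrate the structure the abstract describes, here is a minimal single-process sketch (not the paper's code) of why Parallel Tempering parallelizes naturally: each chain runs at its own temperature and can be updated independently by a separate thread, with only the replica-exchange step coupling neighbouring temperatures. The 1D Ising energy function, function names, and parameters are all illustrative assumptions; a per-chain RNG is used to sidestep the parallel random number generation issue the paper raises.

```python
import math
import random

def metropolis_sweep(state, beta, energy_fn, rng):
    """One Metropolis sweep over a list of +/-1 spins at inverse temperature beta."""
    for i in range(len(state)):
        old_e = energy_fn(state)
        state[i] *= -1                      # propose a single-spin flip
        new_e = energy_fn(state)
        # Accept with probability min(1, exp(-beta * dE)); otherwise undo.
        if rng.random() >= math.exp(min(0.0, -beta * (new_e - old_e))):
            state[i] *= -1

def parallel_tempering(energy_fn, n_spins, betas, sweeps, seed=0):
    """Run one chain per temperature; after each sweep, attempt replica
    exchanges between adjacent temperatures.  The per-chain sweeps are
    independent, so each could run on its own CPU or GPU thread."""
    # One RNG per chain avoids shared-state contention across threads.
    rngs = [random.Random(seed + k) for k in range(len(betas))]
    chains = [[rngs[k].choice((-1, 1)) for _ in range(n_spins)]
              for k in range(len(betas))]
    for _ in range(sweeps):
        for k, beta in enumerate(betas):    # embarrassingly parallel part
            metropolis_sweep(chains[k], beta, energy_fn, rngs[k])
        # Serial replica-exchange step between neighbouring temperatures:
        # accept a swap with probability min(1, exp((b_i - b_j)(E_i - E_j))).
        for k in range(len(betas) - 1):
            e_i, e_j = energy_fn(chains[k]), energy_fn(chains[k + 1])
            delta = (betas[k] - betas[k + 1]) * (e_i - e_j)
            if rngs[k].random() < math.exp(min(0.0, delta)):
                chains[k], chains[k + 1] = chains[k + 1], chains[k]
    return chains
```

In a multi-threaded or CUDA version, only the per-chain sweep loop is distributed across workers; the exchange step remains a cheap synchronization point, which is why the parallel algorithm stays close to the serial one.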

