
Adaptive Multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

C. A. Navarro, Huang Wei, Youjin Deng
Department of Computer Science, Universidad de Chile, Santiago, Chile
arXiv:1508.06268 [physics.comp-ph] (25 Aug 2015)

@article{navarro2015adaptive,
   title={Adaptive Multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model},
   author={Navarro, C. A. and Wei, Huang and Deng, Youjin},
   year={2015},
   month={aug},
   eprint={1508.06268},
   archivePrefix={arXiv},
   primaryClass={physics.comp-ph}
}


The study of disordered spin systems through Monte Carlo simulations has proven to be a hard task due to the adverse energy landscape present in the low-temperature regime, which makes it difficult for the simulation to escape from a local minimum. Replica-based algorithms such as Exchange Monte Carlo (also known as parallel tempering) are effective at overcoming this problem, reaching equilibrium on disordered spin systems such as the Spin Glass or Random Field models by exchanging information between replicas at neighboring temperatures. In this work we present a multi-GPU Exchange Monte Carlo method designed for the simulation of the 3D Random Field Ising Model. The implementation is based on a two-level parallelization scheme that allows the method to scale its performance with faster GPUs as well as with multiple GPUs. In addition, we modified the original algorithm by adapting the set of temperatures according to the exchange rate observed in short trial runs, increasing the exchange rate in zones where the exchange process is sporadic. Experimental results show that parallel tempering is an ideal strategy for GPU implementation: it runs between one and two orders of magnitude faster than a single-core CPU version, with multi-GPU scaling approximately 99% efficient. The results obtained extend the possibilities of simulation to sizes of L = 32, 64 on a workstation with two GPUs.
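
The abstract rests on two algorithmic ingredients: the replica-exchange (parallel tempering) step, where replicas at neighboring temperatures swap configurations with the standard Metropolis probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]), and an adaptive pass that densifies the temperature set wherever exchanges are sporadic. The C++ sketch below illustrates both on the CPU side under stated assumptions; the function names, the midpoint-insertion rule, and the 0.2 acceptance threshold are illustrative choices, not taken from the authors' multi-GPU code.

#include <cmath>
#include <random>
#include <utility>
#include <vector>

struct Replica {
    double beta;    // inverse temperature 1/T of this replica
    double energy;  // energy of its current spin configuration
};

// Parallel tempering step: replicas at neighboring temperatures swap their
// configurations with the Metropolis probability min(1, exp(dBeta * dE)).
bool try_exchange(Replica& a, Replica& b, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const double delta = (a.beta - b.beta) * (a.energy - b.energy);
    if (delta >= 0.0 || u(rng) < std::exp(delta)) {
        // Swapping configurations is equivalent to swapping energies in this
        // reduced sketch, since the spin lattices themselves are not modeled.
        std::swap(a.energy, b.energy);
        return true;
    }
    return false;
}

// Adaptive pass described in the abstract: after a short trial run, insert a
// midpoint temperature wherever the measured exchange rate of a neighboring
// pair is low, raising the exchange rate where the process was sporadic.
std::vector<double> refine_temperatures(const std::vector<double>& T,
                                        const std::vector<double>& rate,
                                        double min_rate = 0.2) {  // assumed threshold
    std::vector<double> refined;
    for (std::size_t i = 0; i + 1 < T.size(); ++i) {
        refined.push_back(T[i]);
        if (rate[i] < min_rate)                        // sporadic exchanges here
            refined.push_back(0.5 * (T[i] + T[i + 1])); // densify this zone
    }
    refined.push_back(T.back());
    return refined;
}

In a multi-GPU setting such as the one described, the costly part is the Metropolis sweep over each replica's L^3 lattice, which is what one would offload to the GPUs; the exchange and adaptation logic above is cheap and can plausibly remain on the host.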
