Efficient molecular dynamics simulations with many-body potentials on graphics processing units

Zheyong Fan, Wei Chen, Ville Vierimaa, Ari Harju
School of Mathematics and Physics, Bohai University, Jinzhou, China
arXiv:1610.03343 [physics.comp-ph] (11 Oct 2016)

@article{fan2016efficient,
   title={Efficient molecular dynamics simulations with many-body potentials on graphics processing units},
   author={Fan, Zheyong and Chen, Wei and Vierimaa, Ville and Harju, Ari},
   year={2016},
   month={oct},
   eprint={1610.03343},
   archivePrefix={arXiv},
   primaryClass={physics.comp-ph}
}


Graphics processing units have been extensively used to accelerate classical molecular dynamics simulations. However, progress on accelerating force evaluation for many-body potentials has been much slower than for pairwise potentials. In the conventional force evaluation algorithm for many-body potentials, the force, virial stress, and heat current for a given atom are accumulated within different loops, which can result in write conflicts between different threads in a CUDA kernel. In this work, we provide a new force evaluation algorithm based on an explicit pairwise force expression for many-body potentials derived recently [Phys. Rev. B 92 (2015) 094301]. In our algorithm, the force, virial stress, and heat current for a given atom can be accumulated within a single thread, so the evaluation is free of write conflicts. We discuss the formulations and algorithms and evaluate their performance. A new open-source code, GPUMD, is developed based on the proposed formulations. For the Tersoff many-body potential, the double-precision performance of GPUMD using a Tesla K40 card is equivalent to that of the LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) molecular dynamics code running with about 100 CPU cores (Intel Xeon CPU X5670 @ 2.93 GHz), and the single-precision performance of GPUMD reaches $10^8$ atom-steps per second.
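The abstract contrasts scatter-style accumulation (a thread also updates its neighbors' partial forces, which requires atomic operations) with the gather-style accumulation enabled by an explicit pairwise force expression, where each atom is written by exactly one thread. The CUDA kernel below is a minimal sketch of the latter idea only; the neighbor-list layout, the dummy pair_force() model, and all names are illustrative assumptions, not the actual GPUMD implementation of the Tersoff potential.

// Sketch: gather-style force evaluation, one thread per atom, no atomics.
// pair_force() stands in for the pairwise form F_ij of a many-body potential
// [Phys. Rev. B 92 (2015) 094301]; here it is a dummy model for illustration.
__device__ void pair_force(double x12, double y12, double z12,
                           double* fx12, double* fy12, double* fz12)
{
    double d2 = x12 * x12 + y12 * y12 + z12 * z12 + 1.0e-12;
    double s = 1.0 / d2;               // placeholder magnitude, not Tersoff
    *fx12 = s * x12;
    *fy12 = s * y12;
    *fz12 = s * z12;
}

// NN[i] = number of neighbors of atom i; NL[k * N + i] = k-th neighbor of i
// (a coalesced, column-major neighbor list layout assumed for this sketch).
__global__ void find_force(int N, const int* NN, const int* NL,
                           const double* x, const double* y, const double* z,
                           double* fx, double* fy, double* fz, double* virial_xx)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= N) return;

    double fx_i = 0.0, fy_i = 0.0, fz_i = 0.0, v_xx = 0.0;
    for (int k = 0; k < NN[i]; ++k) {
        int j = NL[k * N + i];
        double x12 = x[j] - x[i];
        double y12 = y[j] - y[i];
        double z12 = z[j] - z[i];
        double fx12, fy12, fz12;
        pair_force(x12, y12, z12, &fx12, &fy12, &fz12);
        fx_i += fx12;
        fy_i += fy12;
        fz_i += fz12;
        v_xx -= 0.5 * x12 * fx12;      // per-atom virial term, accumulated locally
    }
    // Each output slot is owned by exactly one thread: no write conflicts,
    // so no atomicAdd is needed for force, virial, or heat-current data.
    fx[i] = fx_i;
    fy[i] = fy_i;
    fz[i] = fz_i;
    virial_xx[i] = v_xx;
}

// Host-side launch (assumed block size of 128 threads):
// find_force<<<(N + 127) / 128, 128>>>(N, NN, NL, x, y, z, fx, fy, fz, virial_xx);

In a scatter-style kernel the inner loop would also write to fx[j], fy[j], fz[j], forcing atomic updates; gathering all contributions for atom i into thread-local accumulators is what removes that cost.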