
Domain Decomposition method on GPU cluster

Yusuke Osaki, Ken-Ichi Ishikawa
Graduate School of Science, Hiroshima University, Kagamiyama 1-3-1, Higashi-Hiroshima, 739-8526, Japan
arXiv:1011.3318 [hep-lat] (15 Nov 2010)

@article{2010arXiv1011.3318O,
   author = {{Osaki}, Y. and {Ishikawa}, K.-I.},
   title = {{Domain Decomposition method on GPU cluster}},
   journal = {ArXiv e-prints},
   archivePrefix = {arXiv},
   eprint = {1011.3318},
   primaryClass = {hep-lat},
   keywords = {High Energy Physics - Lattice, Computer Science - Mathematical Software},
   year = {2010},
   month = {nov},
   adsurl = {http://adsabs.harvard.edu/abs/2010arXiv1011.3318O},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


Parallel GPGPU computing for lattice QCD simulations has a bottleneck in GPU-to-GPU data communication due to the lack of a direct data-exchange facility. In this work we investigate the performance of a quark solver using the restricted additive Schwarz (RAS) preconditioner on a low-cost GPU cluster. We expect that the RAS preconditioner with an appropriate domain decomposition and task distribution reduces the communication bottleneck. The GPU cluster we constructed is composed of four PC boxes, each with two GPU cards attached, giving eight GPU cards in total. The compute nodes are connected with rather slow but low-cost Gigabit Ethernet. We include the RAS preconditioner in the single-precision part of the mixed-precision nested-BiCGStab algorithm and distribute the single-precision task to the multiple GPUs. The benchmarking is done with the O(a)-improved Wilson quark on a randomly generated gauge configuration of size $32^4$. We observe a factor-of-two improvement in the solver performance with the RAS preconditioner compared to that without it, and find that the improvement mainly comes from the reduction of the communication bottleneck, as expected.
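For orientation, the sketch below illustrates the general solver structure the abstract describes: an outer double-precision loop whose residual correction is computed in single precision, with a restricted additive Schwarz preconditioner built from overlapping subdomains. This is a minimal, generic sketch and not the paper's implementation: it uses a small dense test matrix instead of the Wilson-Dirac operator, a plain Richardson inner iteration instead of nested BiCGStab, and runs on the CPU; all function names (make_subdomains, ras_apply, mixed_precision_solve) are illustrative.

```python
# Minimal sketch: mixed-precision outer loop with a single-precision,
# RAS-preconditioned inner correction.  Generic numpy example, not the
# paper's GPU code.
import numpy as np


def make_subdomains(n, n_dom, overlap):
    """Split indices 0..n-1 into n_dom contiguous blocks plus an overlap region."""
    size = n // n_dom
    doms = []
    for d in range(n_dom):
        lo, hi = d * size, ((d + 1) * size if d < n_dom - 1 else n)
        ext = (max(lo - overlap, 0), min(hi + overlap, n))  # overlapped block
        doms.append(((lo, hi), ext))
    return doms


def ras_apply(A, r, doms):
    """Restricted additive Schwarz: solve on each overlapped block, but write
    back only the non-overlapped (restricted) part of the local solution."""
    z = np.zeros_like(r)
    for (lo, hi), (elo, ehi) in doms:
        A_loc = A[elo:ehi, elo:ehi]             # local overlapped operator
        z_loc = np.linalg.solve(A_loc, r[elo:ehi])
        z[lo:hi] = z_loc[lo - elo:hi - elo]     # restricted write-back
    return z


def mixed_precision_solve(A, b, doms, tol=1e-10, max_outer=50, inner_iters=20):
    """Outer loop in float64; the RAS-preconditioned inner correction runs in
    float32, standing in for the single-precision multi-GPU task (the paper
    nests BiCGStab; plain Richardson is used here for brevity)."""
    A32 = A.astype(np.float32)
    x = np.zeros_like(b)
    for _ in range(max_outer):
        r = b - A @ x                           # double-precision residual
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        e = np.zeros(len(b), dtype=np.float32)  # single-precision correction
        r32 = r.astype(np.float32)
        for _ in range(inner_iters):
            e += ras_apply(A32, r32 - A32 @ e, doms)
        x += e.astype(np.float64)               # accumulate in double precision
    return x


if __name__ == "__main__":
    n = 256
    rng = np.random.default_rng(0)
    # Well-conditioned random test matrix standing in for the lattice operator.
    A = 4.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
    b = rng.standard_normal(n)
    doms = make_subdomains(n, n_dom=8, overlap=4)
    x = mixed_precision_solve(A, b, doms)
    print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

In the paper's setting, each overlapped block solve corresponds to a local problem handled on one GPU, so the inner iterations need no node-to-node communication apart from the residual updates, which is why the RAS preconditioner relieves the Gigabit-Ethernet bottleneck.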

