
The fast multipole method on parallel clusters, multicore processors, and graphics processing units

Eric Darve, Cris Cecka, Toru Takahashi
Mechanical Engineering Department, Institute for Computational and Mathematical Engineering, Stanford University, Durand 209, 496 Lomita Mall, 94305-3030 Stanford, CA, USA
Comptes Rendus Mecanique, Volume 339, Issues 2-3, February-March 2011, Pages 185-193

@article{Darve2011185,
  title    = {The fast multipole method on parallel clusters, multicore processors, and graphics processing units},
  author   = {Eric Darve and Cris Cecka and Toru Takahashi},
  journal  = {Comptes Rendus Mecanique},
  volume   = {339},
  number   = {2-3},
  pages    = {185-193},
  year     = {2011},
  note     = {High Performance Computing},
  issn     = {1631-0721},
  doi      = {10.1016/j.crme.2010.12.005},
  url      = {http://www.sciencedirect.com/science/article/B6X1C-51XFXSJ-1/2/f27467228c361822e5d4a43abc270744},
  keywords = {Calculateur parallele}
}


In this article, we discuss how the fast multipole method (FMM) can be implemented on modern parallel computers, ranging from computer clusters to multicore processors and graphics processing units (GPUs). The FMM is a somewhat difficult application for parallel computing because of its tree structure and because it requires many complex operations that are not regularly structured. Dense linear algebra, for example, allows many optimizations that leverage its regular computation pattern. The FMM can be optimized similarly, but we will see that the optimization steps are more complex. The discussion starts with a general presentation of FMMs. We briefly discuss parallel methods for the FMM, such as building the FMM tree in parallel and reducing communication during the FMM procedure. Finally, we focus on porting and optimizing the FMM on GPUs.
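To make the tree structure mentioned in the abstract concrete, the following sketch (not taken from the paper) shows the control flow of an FMM evaluation: an upward pass (P2M/M2M), then a downward pass (M2L over each cell's interaction list, plus L2L). It uses a complete binary tree over a 1-D domain and reduces the translation operators to trivial sums, so only the traversal pattern, the part that is hard to parallelize, remains visible; real expansions and the near-field P2P step are omitted.

```python
# Illustrative FMM skeleton on a complete binary tree over [0, 1).
# "Multipole" and "local" expansions are reduced to plain charge sums,
# so each leaf's result is the total charge of all well-separated cells.

def fmm_skeleton(points, charges, levels=3):
    ncells = 2 ** levels
    leaf = [[] for _ in range(ncells)]
    for p, q in zip(points, charges):
        leaf[min(int(p * ncells), ncells - 1)].append(q)

    # Upward pass: P2M at the leaves, then M2M toward the root.
    multipole = {levels: [sum(c) for c in leaf]}
    for lvl in range(levels - 1, -1, -1):
        m = multipole[lvl + 1]
        multipole[lvl] = [m[2 * i] + m[2 * i + 1] for i in range(len(m) // 2)]

    # Downward pass: L2L from the parent, then M2L from the interaction
    # list (children of the parent's neighbors that are well separated).
    local = {0: [0.0] * len(multipole[0])}
    for lvl in range(1, levels + 1):
        n = len(multipole[lvl])
        local[lvl] = [local[lvl - 1][i // 2] for i in range(n)]  # L2L
        for i in range(n):
            parent_nbrs = range(max(i // 2 - 1, 0), min(i // 2 + 2, n // 2))
            for j in (c for p in parent_nbrs for c in (2 * p, 2 * p + 1)):
                if abs(i - j) > 1:  # well separated: apply M2L
                    local[lvl][i] += multipole[lvl][j]
    return local[levels]
```

The irregularity discussed in the abstract is visible even here: the interaction list varies per cell and per level, unlike the fixed access pattern of a dense matrix product.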

* * *

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
