
Pipelining the Fast Multipole Method over a Runtime System

Emmanuel Agullo, Bérenger Bramas, Olivier Coulaud, Eric Darve, Matthias Messner, Toru Takahashi
INRIA Bordeaux – Sud-Ouest, LaBRI
arXiv:1206.0115v1 [cs.DC] (1 Jun 2012)

@article{2012arXiv1206.0115A,
   author = {{Agullo}, E. and {Bramas}, B. and {Coulaud}, O. and {Darve}, E. and {Messner}, M. and {Takahashi}, T.},
   title = "{Pipelining the Fast Multipole Method over a Runtime System}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1206.0115},
   primaryClass = "cs.DC",
   keywords = {Computer Science - Distributed, Parallel, and Cluster Computing},
   year = 2012,
   month = jun,
   adsurl = {http://adsabs.harvard.edu/abs/2012arXiv1206.0115A},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


Fast Multipole Methods (FMM) are a fundamental operation for the simulation of many physical problems. The high-performance design of such methods usually requires carefully tuning the algorithm for both the targeted physics and the hardware. In this paper, we propose a new approach that achieves high performance across architectures. Our method consists of expressing the FMM algorithm as a task flow and employing a state-of-the-art runtime system, StarPU, to process the tasks on the different processing units. We carefully design the task flow, the mathematical operators, their Central Processing Unit (CPU) and Graphics Processing Unit (GPU) implementations, as well as the scheduling schemes. We compute the potentials and forces of 200 million particles in 48.7 seconds on a homogeneous 160-core SGI Altix UV 100, and of 38 million particles in 13.34 seconds on a heterogeneous 12-core Intel Nehalem processor enhanced with 3 Nvidia M2090 Fermi GPUs.
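The task-flow idea can be illustrated with a few lines of StarPU code. The sketch below is our own illustration, not code from the paper: the leaf size N and the p2p_cpu placeholder kernel are hypothetical. It registers a leaf's particle data with the runtime and submits a single P2P task; in the approach the abstract describes, every FMM operator (P2M, M2M, M2L, L2L, L2P, P2P) of the octree traversal becomes such a task, and StarPU schedules the tasks on CPU cores and GPUs according to the expressed data dependencies.

/* Minimal sketch (not the authors' code): one FMM operator as a StarPU task.
 * Build with, e.g.: gcc fmm_task.c $(pkg-config --cflags --libs starpu-1.3)
 */
#include <starpu.h>
#include <stdint.h>
#include <stdio.h>

#define N 1024  /* hypothetical number of particles in one leaf */

/* CPU implementation of the task. A CUDA variant could be listed in
 * .cuda_funcs so StarPU may run the same logical task on a GPU. */
static void p2p_cpu(void *buffers[], void *cl_arg)
{
    float *pot = (float *)STARPU_VECTOR_GET_PTR(buffers[0]);
    unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
    (void)cl_arg;
    for (unsigned i = 0; i < n; i++)
        pot[i] += 1.0f;  /* placeholder for the real pairwise kernel */
}

static struct starpu_codelet p2p_cl = {
    .cpu_funcs = { p2p_cpu },
    .nbuffers  = 1,
    .modes     = { STARPU_RW },
};

int main(void)
{
    float potentials[N] = { 0 };
    starpu_data_handle_t h;

    if (starpu_init(NULL) != 0)
        return 1;

    /* Register the leaf's data so the runtime can track dependencies
     * and transfer it between CPU and GPU memory as needed. */
    starpu_vector_data_register(&h, STARPU_MAIN_RAM,
                                (uintptr_t)potentials, N, sizeof(float));

    /* Submit the task; the scheduler decides when and where it runs. */
    starpu_task_insert(&p2p_cl, STARPU_RW, h, 0);

    starpu_task_wait_for_all();
    starpu_data_unregister(h);
    starpu_shutdown();
    printf("potential[0] = %f\n", potentials[0]);
    return 0;
}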
