
A GPU Accelerated Aggregation Algebraic Multigrid Method

Rajesh Gandham, Ken Esler, Yongpeng Zhang
Department of Computational and Applied Mathematics, Rice University
arXiv:1403.1649 [math.NA], (7 Mar 2014)

@article{2014arXiv1403.1649G,
   author = {{Gandham}, R. and {Esler}, K. and {Zhang}, Y.},
   title = "{A GPU Accelerated Aggregation Algebraic Multigrid Method}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1403.1649},
   primaryClass = "math.NA",
   keywords = {Mathematics - Numerical Analysis, Computer Science - Numerical Analysis},
   year = 2014,
   month = mar,
   adsurl = {http://adsabs.harvard.edu/abs/2014arXiv1403.1649G},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


We present an efficient, robust, and fully GPU-accelerated aggregation-based algebraic multigrid preconditioning technique for the solution of large sparse linear systems arising from the discretization of elliptic PDEs. The method consists of two stages: setup and solve. In the setup stage, hierarchical coarse grids are constructed by aggregating fine grid nodes. These aggregations are obtained from a set of maximal independent nodes of the fine grid; we use a fine-grain parallel algorithm for finding a maximal independent set in the graph of strong negative connections. The aggregations are combined with a piecewise-constant (unsmoothed) interpolation from the coarse grid solution to the fine grid solution, ensuring low setup and interpolation cost. Grid-independent convergence is achieved by using recursive Krylov iterations (K-cycles) in the solve stage. An efficient combination of K-cycles and standard multigrid V-cycles is used as the preconditioner for Krylov iterative solvers such as generalized minimal residual (GMRES) and conjugate gradient (CG). We compare the solver performance with other solvers based on smoothed aggregation and classical algebraic multigrid methods.
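The setup stage described in the abstract can be illustrated with a small, hedged sketch. The authors' implementation is a fine-grain parallel GPU method; the sequential Python below (all names are illustrative, not from the paper) only shows the three ingredients for a 1D model Laplacian: a maximal independent set of the connection graph, aggregation of the remaining nodes around those roots, and the piecewise-constant (unsmoothed) prolongation that defines the Galerkin coarse operator.

```python
import numpy as np

def laplacian_1d(n):
    """Dense tridiagonal matrix from the 3-point finite-difference Laplacian."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def maximal_independent_set(A):
    """Greedy MIS on the graph of off-diagonal connections.
    The paper uses a fine-grain parallel randomized variant;
    this sequential loop only illustrates the resulting set."""
    n = A.shape[0]
    state = np.zeros(n, dtype=int)  # 0 = undecided, 1 = in MIS, -1 = excluded
    for i in range(n):
        if state[i] == 0:
            state[i] = 1                       # add node i to the MIS
            for j in np.nonzero(A[i])[0]:
                if j != i and state[j] == 0:
                    state[j] = -1              # exclude its neighbors
    return np.nonzero(state == 1)[0]

def aggregate(A, roots):
    """Assign every fine node to the aggregate of a neighboring root."""
    n = A.shape[0]
    agg = np.full(n, -1)
    agg[roots] = np.arange(len(roots))         # each root seeds one aggregate
    changed = True
    while changed:                             # sweep until all nodes attached
        changed = False
        for i in range(n):
            if agg[i] == -1:
                for j in np.nonzero(A[i])[0]:
                    if agg[j] != -1:
                        agg[i] = agg[j]
                        changed = True
                        break
    return agg

def piecewise_constant_prolongation(agg):
    """Unsmoothed interpolation: P[i, agg[i]] = 1, one nonzero per row."""
    n, nc = len(agg), agg.max() + 1
    P = np.zeros((n, nc))
    P[np.arange(n), agg] = 1.0
    return P

A = laplacian_1d(8)
roots = maximal_independent_set(A)
agg = aggregate(A, roots)
P = piecewise_constant_prolongation(agg)
Ac = P.T @ A @ P   # Galerkin coarse-grid operator for the next level
```

Applied recursively, this yields the hierarchy of coarse grids; because each row of P has a single nonzero, both the setup and the interpolation are cheap, which is the trade-off the K-cycle solve stage then compensates for.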

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
