
High-performance Implementations and Large-scale Validation of the Link-wise Artificial Compressibility Method

Christian Obrecht, Pietro Asinari, Frédéric Kuznik, Jean-Jacques Roux
CETHIL UMR 5008 (CNRS, INSA-Lyon, UCB-Lyon 1), Université de Lyon, France
hal-01059217 (9 September 2014)

@article{obrecht:hal-01059217,
   hal_id={hal-01059217},
   url={http://hal.archives-ouvertes.fr/hal-01059217},
   title={High-performance implementations and large-scale validation of the link-wise artificial compressibility method},
   author={Obrecht, Christian and Asinari, Pietro and Kuznik, Fr{\'e}d{\'e}ric and Roux, Jean-Jacques},
   language={English},
   affiliation={Centre de Thermique de Lyon – CETHIL, Multi-scale Modeling Lab – SMaLL},
   pages={143-153},
   journal={Journal of Computational Physics},
   volume={275},
   number={15},
   audience={international},
   year={2014},
   month={Oct},
   pdf={http://hal.archives-ouvertes.fr/hal-01059217/PDF/obrecht14a.pdf}
}


The link-wise artificial compressibility method (LW-ACM) is a recent formulation of the artificial compressibility method for solving the incompressible Navier-Stokes equations. Two three-dimensional implementations of the LW-ACM on CUDA-enabled GPUs are described. The first is a modified version of a state-of-the-art CUDA implementation of the lattice Boltzmann method (LBM), showing that an existing GPU LBM solver can easily be adapted to the LW-ACM. The second follows a novel approach, which yields a performance increase of up to 1.8x over the LBM implementation considered here, while reducing the memory requirements by a factor of 5.25. Large-scale simulations of the lid-driven cubic cavity at Reynolds number Re = 2000 were performed for both the LW-ACM and the LBM. Comparison of the simulation results against spectral element reference data shows that the LW-ACM performs almost as well as multiple-relaxation-time LBM in terms of accuracy.
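Since the LW-ACM evolves only macroscopic quantities, the memory savings follow directly from the data layout: a D3Q19 LBM solver must store a full set of particle distribution functions per node, whereas a link-wise solver can rebuild the populations on the fly from the density and velocity fields. The CUDA sketch below illustrates one possible single time step of such a link-wise update on a D3Q19 lattice, keeping only four scalar fields per node (plus a second buffer for the update). It is a minimal illustration under stated assumptions, not the authors' implementation: the lattice size, array and kernel names, periodic wrapping, uniform initialization, and relaxation parameter are all chosen for the example, and the update rule (upwind equilibria corrected by the odd part of the equilibrium) follows the common statement of the LW-ACM scheme rather than the exact code of the paper.

// Minimal LW-ACM-style time step on a D3Q19 lattice (illustrative sketch).
// Only the four macroscopic fields (rho, ux, uy, uz) live in GPU memory;
// link populations are rebuilt on the fly from equilibria, which is the
// source of the memory savings over a distribution-based LBM solver.
#include <cuda_runtime.h>
#include <cstdlib>

#define NX 128
#define NY 128
#define NZ 128
#define Q  19

// Standard D3Q19 velocity set and weights.
__constant__ int ex[Q] = {0, 1,-1, 0, 0, 0, 0, 1,-1, 1,-1, 1,-1, 1,-1, 0, 0, 0, 0};
__constant__ int ey[Q] = {0, 0, 0, 1,-1, 0, 0, 1,-1,-1, 1, 0, 0, 0, 0, 1,-1, 1,-1};
__constant__ int ez[Q] = {0, 0, 0, 0, 0, 1,-1, 0, 0, 0, 0, 1,-1,-1, 1, 1,-1,-1, 1};
__constant__ float w[Q] = {1.f/3.f,
    1.f/18.f,1.f/18.f,1.f/18.f,1.f/18.f,1.f/18.f,1.f/18.f,
    1.f/36.f,1.f/36.f,1.f/36.f,1.f/36.f,1.f/36.f,1.f/36.f,
    1.f/36.f,1.f/36.f,1.f/36.f,1.f/36.f,1.f/36.f,1.f/36.f};

// Periodic wrap for brevity; a real cavity run would impose the moving
// lid and no-slip walls instead.
__device__ int idx(int x, int y, int z)
{
    x = (x + NX) % NX; y = (y + NY) % NY; z = (z + NZ) % NZ;
    return x + NX * (y + NY * z);
}

// Second-order equilibrium distribution.
__device__ float feq(int i, float rho, float ux, float uy, float uz)
{
    float eu = ex[i]*ux + ey[i]*uy + ez[i]*uz;
    float u2 = ux*ux + uy*uy + uz*uz;
    return w[i] * rho * (1.f + 3.f*eu + 4.5f*eu*eu - 1.5f*u2);
}

__global__ void lwacm_step(const float *rho, const float *ux, const float *uy,
                           const float *uz, float *rho_n, float *ux_n,
                           float *uy_n, float *uz_n, float omega)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int z = blockIdx.z * blockDim.z + threadIdx.z;
    if (x >= NX || y >= NY || z >= NZ) return;

    int here = idx(x, y, z);
    float c  = 2.f / omega - 1.f;            // viscosity-related factor
    float r = 0.f, jx = 0.f, jy = 0.f, jz = 0.f;

    for (int i = 0; i < Q; ++i) {
        int up = idx(x - ex[i], y - ey[i], z - ez[i]);   // upwind node
        // Equilibria are rebuilt from macroscopic fields; nothing is
        // streamed from stored distributions.
        float fu = feq(i, rho[up], ux[up], uy[up], uz[up]);
        // Odd (velocity-antisymmetric) parts of the local and upwind
        // equilibria: f^{eq,o} = [f^{eq}(u) - f^{eq}(-u)] / 2.
        float fl_o = 0.5f*(feq(i, rho[here], ux[here], uy[here], uz[here])
                         - feq(i, rho[here],-ux[here],-uy[here],-uz[here]));
        float fu_o = 0.5f*(fu - feq(i, rho[up],-ux[up],-uy[up],-uz[up]));
        float fi = fu + c * (fl_o - fu_o);   // link-wise update
        r  += fi;
        jx += fi*ex[i]; jy += fi*ey[i]; jz += fi*ez[i];
    }
    rho_n[here] = r;
    ux_n[here] = jx/r; uy_n[here] = jy/r; uz_n[here] = jz/r;
}

int main()
{
    size_t n = (size_t)NX * NY * NZ, bytes = n * sizeof(float);
    float *f[8];                 // rho, ux, uy, uz (old and new buffers)
    for (int k = 0; k < 8; ++k) cudaMalloc(&f[k], bytes);

    // Uniform initial state: rho = 1, u = 0.
    float *ones = (float *)malloc(bytes);
    for (size_t k = 0; k < n; ++k) ones[k] = 1.f;
    cudaMemcpy(f[0], ones, bytes, cudaMemcpyHostToDevice);
    free(ones);
    for (int k = 1; k < 4; ++k) cudaMemset(f[k], 0, bytes);

    dim3 block(8, 8, 8), grid(NX/8, NY/8, NZ/8);
    lwacm_step<<<grid, block>>>(f[0], f[1], f[2], f[3],
                                f[4], f[5], f[6], f[7], 1.9f);
    cudaDeviceSynchronize();
    for (int k = 0; k < 8; ++k) cudaFree(f[k]);
    return 0;
}

With double buffering, this layout stores 8 scalar fields per node against 38 (2 x 19 distributions) for a similarly buffered D3Q19 LBM solver; the exact 5.25 reduction factor reported above depends on the storage scheme the authors use. A real cavity simulation would also replace the periodic wrap with no-slip walls and a moving lid.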
