Using SIMD and SIMT vectorization to evaluate sparse chemical kinetic Jacobian matrices and thermochemical source terms
Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
arXiv:1809.01029 [physics.comp-ph], (4 Sep 2018)
@article{curtis2018using,
  title={Using SIMD and SIMT vectorization to evaluate sparse chemical kinetic Jacobian matrices and thermochemical source terms},
  author={Curtis, Nicholas J. and Niemeyer, Kyle E. and Sung, Chih-Jen},
  year={2018},
  month={sep},
  eprint={1809.01029},
  archivePrefix={arXiv},
  primaryClass={physics.comp-ph}
}
Accurately predicting key combustion phenomena in reactive-flow simulations, e.g., lean blow-out, extinction/ignition limits, and pollutant formation, necessitates the use of detailed chemical kinetics. The large size and high levels of numerical stiffness typically present in chemical kinetic models relevant to transportation and power-generation applications make the efficient evaluation/factorization of the chemical kinetic Jacobian and thermochemical source terms critical to the performance of reactive-flow codes. Here we investigate the performance of vectorized evaluation of constant-pressure/volume thermochemical source terms and sparse/dense chemical kinetic Jacobians using single-instruction, multiple-data (SIMD) and single-instruction, multiple-thread (SIMT) paradigms. These are implemented in pyJac, an open-source, reproducible code-generation platform. A new formulation of the chemical kinetic governing equations was derived and verified, resulting in Jacobian sparsities of 28.6-92.0% for the tested models. Speedups of 3.40-4.08x were found for shallow-vectorized OpenCL source-rate evaluation compared with a parallel OpenMP code on an AVX2 central processing unit (CPU), increasing to 6.63-9.44x and 3.03-4.23x for sparse and dense chemical kinetic Jacobian evaluation, respectively. Furthermore, the effect of data ordering was investigated and a storage pattern specifically formulated for vectorized evaluation was proposed; in addition, the effects of the constant-pressure/volume assumptions and of varying vector widths on source-term evaluation performance were studied. Speedups reached up to 17.60x and 45.13x for dense and sparse evaluation on the GPU, and up to 55.11x and 245.63x on the CPU, over a first-order finite-difference Jacobian approach. Further, dense Jacobian evaluation was up to 19.56x and 2.84x faster than a previous version of pyJac on a CPU and GPU, respectively.
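To illustrate the shallow SIMD vectorization and vector-friendly data ordering described in the abstract, the following minimal C sketch (not pyJac's generated code; the state count, reaction count, and Arrhenius parameters are placeholders) evaluates forward rate constants for a batch of thermochemical states stored in a struct-of-arrays layout, so that consecutive states map onto consecutive SIMD lanes:

/* Minimal sketch, not pyJac's generated code: "shallow" SIMD evaluation of
 * Arrhenius forward rate constants across a batch of independent
 * thermochemical states.  Sizes and parameters below are placeholders. */
#include <math.h>
#include <stddef.h>

#define N_STATES    1024   /* number of thermochemical states in the batch   */
#define N_REACTIONS 325    /* model-dependent reaction count (placeholder)   */

/* Struct-of-arrays layout: each variable is stored contiguously across
 * states, so consecutive states fall into consecutive SIMD lanes. */
static double T[N_STATES];                       /* temperature of each state */
static double kf[N_REACTIONS][N_STATES];         /* forward rate constants    */
static double A[N_REACTIONS], beta[N_REACTIONS], Ta[N_REACTIONS]; /* Arrhenius parameters */

void eval_forward_rates(void)
{
    for (size_t j = 0; j < N_REACTIONS; ++j) {
        /* The inner loop runs over independent states, so the compiler can
         * vectorize it directly (e.g., with AVX2 on the CPU). */
        #pragma omp simd
        for (size_t i = 0; i < N_STATES; ++i) {
            kf[j][i] = A[j] * pow(T[i], beta[j]) * exp(-Ta[j] / T[i]);
        }
    }
}

Broadly, the SIMT (GPU) variant expresses the same data parallelism with each thread handling one state; the actual pyJac-generated kernels are emitted in OpenCL rather than plain C.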
September 9, 2018 by hgpu