
Accelerating finite-rate chemical kinetics with coprocessors: comparing vectorization methods on GPUs, MICs, and CPUs

Christopher P. Stone, Kyle E. Niemeyer
Computational Science and Engineering, LLC, Chicago, IL 60622, USA
arXiv:1608.05794 [physics.comp-ph] (20 Aug 2016)

@article{stone2016accelerating,
   title={Accelerating finite-rate chemical kinetics with coprocessors: comparing vectorization methods on GPUs, MICs, and CPUs},
   author={Stone, Christopher P. and Niemeyer, Kyle E.},
   year={2016},
   month={aug},
   eprint={1608.05794},
   archivePrefix={arXiv},
   primaryClass={physics.comp-ph}
}

Efficient ordinary differential equation (ODE) solvers for chemical kinetics must account for the thread- and instruction-level parallelism of the underlying hardware, especially on many-core coprocessors, as well as for numerical efficiency. A stiff Rosenbrock solver and a nonstiff Runge-Kutta solver were implemented using the single instruction, multiple thread (SIMT) and single instruction, multiple data (SIMD) paradigms with OpenCL. The performance of these parallel implementations was measured with three chemical kinetic models across several multicore and many-core platforms. Two runtime benchmarks were conducted to clearly determine any performance advantage offered by either method: evaluating the right-hand-side source terms in parallel, and integrating a series of constant-pressure homogeneous reactors using the Rosenbrock and Runge-Kutta solvers. The right-hand-side evaluations with SIMD parallelism on the host multicore Xeon CPU and many-core Xeon Phi coprocessor ran approximately three times faster than the baseline multithreaded code. The SIMT model on the host and the Phi was 13-35% slower than the baseline, while the SIMT model on the GPU provided approximately the same performance as the SIMD model on the Phi. The runtimes of both ODE solvers decreased by factors of 2.5-2.7 with the SIMD implementations on the host CPU and 4.7-4.9 with the Xeon Phi coprocessor compared to the baseline parallel code. The SIMT implementations on the GPU ran 1.4-1.6 times faster than the baseline multithreaded CPU code; however, this was significantly slower than the SIMD versions on the host CPU or the Xeon Phi. The performance difference between the three platforms was attributed to thread divergence caused by the adaptive step sizes within the ODE integrators. Analysis showed that the wider vector width of the GPU incurs a higher degree of divergence than the narrower vector units of the Sandy Bridge CPU or the Xeon Phi.
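The core contrast is between two data layouts for the same kernel. The following minimal OpenCL sketch illustrates the idea; it is not the authors' code, and the kernel names, the species count NSP, and the placeholder rate expressions are illustrative assumptions only. The SIMT kernel assigns one reactor per work-item, while the SIMD kernel packs eight reactors into double8 vector types so that each instruction operates on eight reactors at once.

#pragma OPENCL EXTENSION cl_khr_fp64 : enable

#define NSP 9   /* hypothetical species count, e.g., a small H2/O2 mechanism */

/* SIMT layout: one work-item per reactor. Natural on GPUs, but adjacent
 * work-items diverge once an adaptive ODE solver gives their reactors
 * different step-size histories. */
__kernel void rhs_simt(__global const double *restrict T,    /* temperature per reactor */
                       __global const double *restrict Y,    /* NSP mass fractions per reactor */
                       __global double *restrict dYdt)       /* source terms (output) */
{
    const int i = get_global_id(0);          /* reactor index */
    const double Ti = T[i];
    for (int k = 0; k < NSP; ++k) {
        /* Placeholder body: a real RHS evaluates reaction rates here. */
        dYdt[i * NSP + k] = -Y[i * NSP + k] * Ti;
    }
}

/* SIMD layout: each work-item carries eight reactors in OpenCL vector
 * types, mapping onto CPU/Xeon Phi SIMD lanes (a double8 fills a
 * 512-bit register); one statement updates eight reactors. */
__kernel void rhs_simd(__global const double8 *restrict T,
                       __global const double8 *restrict Y,
                       __global double8 *restrict dYdt)
{
    const int i = get_global_id(0);          /* index of a group of 8 reactors */
    const double8 Ti = T[i];
    for (int k = 0; k < NSP; ++k) {
        dYdt[i * NSP + k] = -Y[i * NSP + k] * Ti;
    }
}

The divergence the abstract attributes to adaptive step sizes arises in the SIMT case: when error control hands neighboring reactors different step sizes, work-items in the same warp or wavefront follow different control paths and serialize, and wider vector hardware exposes more of this penalty.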