Intel Xeon Phi acceleration of Hybrid Total FETI solver

Michal Merta, Lubomir Riha, Ondrej Meca, Alexandros Markopoulos, Tomas Brzobohaty, Tomas Kozubek, Vit Vondrak
IT4Innovations, VSB – Technical University of Ostrava, 17. listopadu 15/2172, 708 33 Ostrava-Poruba, Czech Republic
Advances in Engineering Software, 2017


@article{merta2017xeonphi,
   title={Intel Xeon Phi acceleration of Hybrid Total FETI solver},
   author={Merta, Michal and Riha, Lubomir and Meca, Ondrej and Markopoulos, Alexandros and Brzobohaty, Tomas and Kozubek, Tomas and Vondrak, Vit},
   journal={Advances in Engineering Software},
   year={2017}
}



This paper describes an approach to accelerating the Hybrid Total FETI (HTFETI) domain decomposition method using Intel Xeon Phi coprocessors. The HTFETI method is a memory-bound algorithm which relies on sparse linear BLAS operations with an irregular memory access pattern. The presented local Schur complement (LSC) method has a regular memory access pattern, which allows the solver to fully utilize the fast memory bandwidth of the Intel Xeon Phi. This translates to a speedup of more than 10.9 for the HTFETI iterative solver when solving a heat transfer problem (3D Laplace equation) with 3 billion unknowns on almost 400 compute nodes. The comparison is between the CPU computation using sparse data structures (the PARDISO sparse direct solver) and the LSC computation on the Xeon Phi. For a structural mechanics problem (3D linear elasticity) with 1 billion DOFs, the respective speedup is 3.4. The presented speedups are asymptotic and are reached for problems requiring a high number of iterations (e.g., ill-conditioned problems, transient problems, contact problems). For problems which can be solved in under one hundred iterations the local Schur complement method is not optimal; for these cases we have also implemented sparse matrix processing using PARDISO on the Xeon Phi accelerators.
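To illustrate the trade-off the abstract describes, the following is a minimal NumPy sketch of the local Schur complement idea: once the interior unknowns of a subdomain are eliminated, applying the dense LSC to a boundary vector is a single dense matrix-vector product with a regular memory access pattern, equivalent to the sparse-solver path. The matrix, its size, and the interior/boundary partitioning below are toy assumptions for illustration, not the paper's actual data structures.

```python
# Toy sketch of a local Schur complement (LSC), assuming a small
# symmetric positive definite "stiffness" matrix K partitioned into
# interior (i) and boundary (b) blocks. Sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, nb = 12, 4                       # total DOFs, boundary DOFs
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)         # SPD toy stiffness matrix

ni = n - nb                         # interior DOFs
Kii = K[:ni, :ni]                   # interior-interior block
Kib = K[:ni, ni:]                   # interior-boundary block
Kbi = K[ni:, :ni]                   # boundary-interior block
Kbb = K[ni:, ni:]                   # boundary-boundary block

# Dense local Schur complement: S = Kbb - Kbi * Kii^{-1} * Kib.
S = Kbb - Kbi @ np.linalg.solve(Kii, Kib)

xb = rng.standard_normal(nb)

# Applying S is one dense GEMV with a regular memory access pattern
# (the kind of operation that saturates the accelerator's bandwidth)...
y_dense = S @ xb

# ...and is mathematically equivalent to the sparse-direct-solver path,
# which solves with the interior block for each right-hand side.
y_sparse = Kbb @ xb - Kbi @ np.linalg.solve(Kii, Kib @ xb)

assert np.allclose(y_dense, y_sparse)
```

The cost asymmetry follows directly: forming S is expensive up front, but each subsequent application is a dense operation, which pays off only when the iterative solver applies it many times.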
