
Harnessing Batched BLAS/LAPACK Kernels on GPUs for Parallel Solutions of Block Tridiagonal Systems

David Jin, Alexis Montoison, Sungho Shin
Massachusetts Institute of Technology
arXiv:2509.03015 [cs.MS], 3 Sep 2025

@misc{jin2025harnessingbatchedblaslapackkernels,
   title={Harnessing Batched BLAS/LAPACK Kernels on GPUs for Parallel Solutions of Block Tridiagonal Systems},
   author={David Jin and Alexis Montoison and Sungho Shin},
   year={2025},
   eprint={2509.03015},
   archivePrefix={arXiv},
   primaryClass={cs.MS},
   url={https://arxiv.org/abs/2509.03015}
}


We present a GPU implementation for the factorization and solution of block-tridiagonal symmetric positive definite linear systems, which commonly arise in time-dependent estimation and optimal control problems. Our method employs a recursive algorithm based on Schur complement reduction, transforming the system into a hierarchy of smaller, independent blocks that can be efficiently solved in parallel using batched BLAS/LAPACK routines. While batched routines have been used in sparse solvers, our approach applies these kernels in a tailored way by exploiting the block-tridiagonal structure known in advance. Performance benchmarks based on our open-source, cross-platform implementation, TBD-GPU, demonstrate the advantages of this tailored utilization: achieving substantial speed-ups compared to state-of-the-art CPU direct solvers, including CHOLMOD and HSL MA57, while remaining competitive with NVIDIA cuDSS. However, the current implementation still performs sequential calls of batched routines at each recursion level, and the block size must be sufficiently large to adequately amortize kernel launch overhead.
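The recursion the abstract describes can be illustrated with a small sketch of odd-even (cyclic) Schur-complement reduction for a symmetric positive definite block tridiagonal system. This is not the authors' TBD-GPU code: it is a minimal NumPy prototype, assuming the standard formulation in which each level eliminates the odd-indexed blocks and leaves an independent, half-sized block tridiagonal system over the even-indexed blocks. On a GPU, the per-block factorizations and solves within one level would map onto batched BLAS/LAPACK kernels (e.g. batched `potrf`/`trsm`); here plain `numpy.linalg` calls stand in for them.

```python
import numpy as np

def cyclic_reduction_solve(A, B, b):
    """Solve an SPD block tridiagonal system by recursive Schur-complement
    (odd-even) reduction.

    A: list of n diagonal blocks (d x d)
    B: list of n-1 subdiagonal blocks; B[i] couples row i+1 to row i,
       with the superdiagonal given by B[i].T (symmetry)
    b: list of n right-hand-side vectors
    """
    n = len(A)
    if n == 1:
        return [np.linalg.solve(A[0], b[0])]
    # Invert (conceptually: factor) the odd-indexed diagonal blocks.
    # These are mutually independent -- one batched kernel call per level.
    inv = {j: np.linalg.inv(A[j]) for j in range(1, n, 2)}
    A2, B2, b2 = [], [], []
    for i in range(0, n, 2):
        Ak, bk = A[i].copy(), b[i].copy()
        if i - 1 >= 0:   # fold in the left odd neighbor's Schur complement
            Ak -= B[i-1] @ inv[i-1] @ B[i-1].T
            bk -= B[i-1] @ inv[i-1] @ b[i-1]
        if i + 1 < n:    # fold in the right odd neighbor's Schur complement
            Ak -= B[i].T @ inv[i+1] @ B[i]
            bk -= B[i].T @ inv[i+1] @ b[i+1]
        A2.append(Ak)
        b2.append(bk)
        if i + 2 < n:    # new coupling between consecutive even blocks
            B2.append(-B[i+1] @ inv[i+1] @ B[i])
    # Recurse on the half-sized reduced system, then back-substitute the
    # odd unknowns (again an independent batch of small solves).
    x_even = cyclic_reduction_solve(A2, B2, b2)
    x = [None] * n
    for k, i in enumerate(range(0, n, 2)):
        x[i] = x_even[k]
    for j in range(1, n, 2):
        r = b[j] - B[j-1] @ x[j-1]
        if j + 1 < n:
            r -= B[j].T @ x[j+1]
        x[j] = inv[j] @ r
    return x
```

The recursion depth is O(log n) in the number of block rows, and every level's block operations are independent of one another, which is what makes batched kernels a natural fit; the abstract's caveat about launch overhead corresponds to the fact that each level here is still a sequential round of such batched calls.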

* * *


HGPU group © 2010-2025 hgpu.org

All rights belong to the respective authors
