
Optimizing Krylov Subspace Solvers on Graphics Processing Units

Hartwig Anzt, Stanimire Tomov, Piotr Luszczek, Ichitaro Yamazaki, Jack Dongarra, William Sawyer
Innovative Computing Lab, University of Tennessee, Knoxville, USA
Innovative Computing Lab, University of Tennessee, Technical Report UT-EECS-14-725, 2014

@techreport{anzt2014optimizing,
   title={Optimizing Krylov Subspace Solvers on Graphics Processing Units},
   author={Anzt, Hartwig and Tomov, Stanimire and Luszczek, Piotr and Yamazaki, Ichitaro and Dongarra, Jack and Sawyer, William},
   institution={Innovative Computing Lab, University of Tennessee},
   number={UT-EECS-14-725},
   year={2014}
}


Krylov subspace solvers are often the method of choice when solving sparse linear systems iteratively. At the same time, hardware accelerators such as graphics processing units (GPUs) continue to offer significant floating-point performance gains for matrix and vector computations through easy-to-use libraries of computational kernels. However, as these libraries are usually composed of a well-optimized but limited set of linear algebra operations, applications that use them often fail to leverage the full potential of the accelerator. In this paper we target the acceleration of the BiCGSTAB solver for GPUs, showing that significant improvement can be achieved by reformulating the method and developing application-specific kernels instead of using the generic CUBLAS library provided by NVIDIA. We propose an implementation that benefits from a significantly reduced number of kernel launches and GPU-host communication events, by means of increased data locality and a simultaneous reduction of multiple scalar products. Using experimental data, we show that, depending on how strongly the (untouched) sparse matrix-vector products dominate the runtime, significant performance improvements can be achieved compared to a reference implementation based on the CUBLAS library. We believe that such optimizations are crucial for the subsequent development of high-level sparse linear algebra libraries.
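
To make the "simultaneous reduction of multiple scalar products" idea concrete, here is a minimal CUDA sketch, not the authors' actual kernel: it fuses two dot products of the kind BiCGSTAB needs per iteration into a single kernel launch, so one pass over the vectors and one block-level reduction replace two separate cublasDdot calls. The kernel name fused_dot2, the 256-thread block size, and the managed-memory test driver are assumptions made for this illustration.

#include <cstdio>
#include <cuda_runtime.h>

// Illustrative fused kernel (hypothetical, not the paper's code): computes
// the two scalar products <r,r0> and <s,t> in a single launch instead of
// two separate cublasDdot calls. Each block reduces its partial sums in
// shared memory; thread 0 of each block then accumulates them into the
// global results with one atomicAdd per dot product.
__global__ void fused_dot2(const double *r, const double *r0,
                           const double *s, const double *t,
                           double *out, int n)   // out[0] = <r,r0>, out[1] = <s,t>
{
    __shared__ double sm0[256];   // partial sums for <r,r0>
    __shared__ double sm1[256];   // partial sums for <s,t>
    double a = 0.0, b = 0.0;
    // grid-stride loop: one pass over the vectors feeds both reductions
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x) {
        a += r[i] * r0[i];
        b += s[i] * t[i];
    }
    sm0[threadIdx.x] = a;
    sm1[threadIdx.x] = b;
    __syncthreads();
    // tree reduction within the block (blockDim.x must be a power of two <= 256)
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride) {
            sm0[threadIdx.x] += sm0[threadIdx.x + stride];
            sm1[threadIdx.x] += sm1[threadIdx.x + stride];
        }
        __syncthreads();
    }
    if (threadIdx.x == 0) {
        // double-precision atomicAdd requires compute capability >= 6.0
        atomicAdd(&out[0], sm0[0]);
        atomicAdd(&out[1], sm1[0]);
    }
}

int main() {
    const int n = 1 << 20;
    double *r, *r0, *s, *t, *out;
    cudaMallocManaged(&r,  n * sizeof(double));
    cudaMallocManaged(&r0, n * sizeof(double));
    cudaMallocManaged(&s,  n * sizeof(double));
    cudaMallocManaged(&t,  n * sizeof(double));
    cudaMallocManaged(&out, 2 * sizeof(double));
    for (int i = 0; i < n; ++i) { r[i] = 1.0; r0[i] = 2.0; s[i] = 0.5; t[i] = 4.0; }
    out[0] = out[1] = 0.0;    // results must be zeroed before the launch
    fused_dot2<<<128, 256>>>(r, r0, s, t, out, n);
    cudaDeviceSynchronize();
    // both dot products should equal 2*n for this test data
    printf("<r,r0> = %.0f (expect %.0f), <s,t> = %.0f (expect %.0f)\n",
           out[0], 2.0 * n, out[1], 2.0 * n);
    return 0;
}

The atomicAdd on doubles requires a GPU of compute capability 6.0 or newer; on older hardware the per-block partial sums would instead be written to a temporary buffer and summed in a second, very small kernel. Either way, the fused variant halves both the number of kernel launches and the number of passes over the vector data compared to calling a generic dot-product routine twice.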
