
A Fast Batched Cholesky Factorization on a GPU

Tingxing Dong, Azzam Haidar, Stanimire Tomov, Jack Dongarra
Innovative Computing Laboratory, University of Tennessee, Knoxville, Knoxville, TN 37916
2014 International Conference on Parallel Processing (ICPP-2014), 2014

@inproceedings{icl:779,
   author    = {Dong, T. and Haidar, A. and Tomov, S. and Dongarra, J.},
   title     = {A Fast Batched Cholesky Factorization on a GPU},
   booktitle = {2014 International Conference on Parallel Processing (ICPP-2014)},
   institution = {Innovative Computing Laboratory, University of Tennessee},
   month     = {sep},
   year      = {2014}
}


Currently, state-of-the-art libraries such as MAGMA focus on very large linear algebra problems, while solving many small independent problems, usually referred to as batched problems, receives far less attention. In this paper, we propose a batched Cholesky factorization on a GPU. Three algorithms are examined: nonblocked, blocked, and recursive blocked. In the recursive blocked algorithm, the left-looking version of the Cholesky factorization is used to factorize the panel, and the right-looking version is used to update the trailing matrix. Our batched Cholesky achieves up to a 1.8x speedup over the optimized parallel implementation in the MKL library on two sockets of Intel Sandy Bridge CPUs. Further, we use the new routines to develop a non-batched Cholesky factorization solver that targets large matrix sizes. Our approach differs from MAGMA in that it is an entirely GPU implementation, where both the panel factorization and the trailing matrix updates run on the GPU, so its performance does not depend on the speed of the CPU. Compared to the MAGMA library, our GPU-only solution achieves 85% of the performance of the hybrid MAGMA code, which uses 16 Sandy Bridge cores in addition to a K40 NVIDIA GPU. Moreover, we reach 80% of the practical dgemm peak of the machine, while MAGMA achieves only 75%. Finally, in terms of energy consumption, we outperform MAGMA by 1.5x in performance-per-watt for large matrices.
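To make the batched idea concrete, the following is a minimal CUDA sketch, not the paper's implementation: one thread block factorizes one small matrix of the batch in place, using the unblocked (right-looking) Cholesky algorithm on a lower-triangular, column-major matrix. The kernel name, the matrix size N, the batch size BATCH, and the SPD test pattern are all assumptions made for the example.

// Hypothetical sketch (not the paper's kernel): one thread block per matrix,
// unblocked Cholesky, lower triangular, column-major storage.
#include <cuda_runtime.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N     32      // assumed size of each small matrix
#define BATCH 1000    // assumed number of matrices in the batch

__global__ void batched_potrf_nonblocked(double *A_batch)
{
    double *A = A_batch + (size_t)blockIdx.x * N * N;  // this block's matrix
    int tid = threadIdx.x;

    for (int k = 0; k < N; ++k) {
        // One thread computes the diagonal entry.
        if (tid == 0)
            A[k + k * N] = sqrt(A[k + k * N]);
        __syncthreads();

        // Scale the column below the diagonal.
        for (int i = k + 1 + tid; i < N; i += blockDim.x)
            A[i + k * N] /= A[k + k * N];
        __syncthreads();

        // Right-looking rank-1 update of the trailing submatrix.
        for (int j = k + 1; j < N; ++j)
            for (int i = j + tid; i < N; i += blockDim.x)
                A[i + j * N] -= A[i + k * N] * A[j + k * N];
        __syncthreads();
    }
}

int main(void)
{
    size_t bytes = (size_t)BATCH * N * N * sizeof(double);
    double *hA = (double *)malloc(bytes), *dA;

    // Fill each matrix with a simple SPD pattern: N+1 on the diagonal,
    // 1 elsewhere (diagonally dominant, hence positive definite).
    for (int b = 0; b < BATCH; ++b)
        for (int j = 0; j < N; ++j)
            for (int i = 0; i < N; ++i)
                hA[(size_t)b * N * N + i + j * N] = (i == j) ? N + 1.0 : 1.0;

    cudaMalloc(&dA, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    batched_potrf_nonblocked<<<BATCH, 128>>>(dA);   // one block per matrix
    cudaDeviceSynchronize();
    cudaMemcpy(hA, dA, bytes, cudaMemcpyDeviceToHost);
    printf("L[0](0,0) = %f (expected sqrt(%d))\n", hA[0], N + 1);

    cudaFree(dA); free(hA);
    return 0;
}

In the blocked and recursive blocked variants described in the abstract, the same per-matrix work is organized panel by panel, with the factored panel used to update the trailing submatrix; a real implementation would also check for non-positive-definite inputs and tune the block size and thread configuration.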
