Triangular matrix inversion on Graphics Processing Unit

Florian Ries, Tommaso De Marco, Matteo Zivieri, Roberto Guerrieri
University of Bologna
In SC ’09: Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis (2009), pp. 9:1–9:10

@inproceedings{Ries:2009:TMI:1654059.1654069,
   author={Ries, Florian and De Marco, Tommaso and Zivieri, Matteo and Guerrieri, Roberto},
   title={Triangular matrix inversion on Graphics Processing Unit},
   booktitle={Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis},
   series={SC '09},
   year={2009},
   isbn={978-1-60558-744-8},
   location={Portland, Oregon},
   pages={9:1--9:10},
   articleno={9},
   numpages={10},
   url={http://doi.acm.org/10.1145/1654059.1654069},
   doi={10.1145/1654059.1654069},
   acmid={1654069},
   publisher={ACM},
   address={New York, NY, USA},
   keywords={CUDA, GPGPU, dense matrix inversion, triangular matrix}
}


Dense matrix inversion is a basic procedure in many linear algebra algorithms. A computationally arduous step in most dense matrix inversion methods is the inversion of the triangular matrices produced by factorization methods such as LU decomposition. In this paper, we demonstrate how triangular matrix inversion (TMI) can be accelerated considerably by using commercial Graphics Processing Units (GPUs) in a standard PC. Our implementation is based on a divide-and-conquer recursive TMI algorithm, efficiently adapted to the GPU architecture. It achieves a 34x speedup over a CPU-based LAPACK reference routine and runs at up to 54 Gflop/s on a GTX 280 in double precision. Limitations of the algorithm are discussed, and strategies to cope with them are introduced. In addition, we show how inversion of an L- and a U-matrix can be performed concurrently on a GTX 295 based dual-GPU system at up to 90 Gflop/s.
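The divide-and-conquer recursion the abstract refers to is based on a standard block identity: for a lower-triangular matrix L = [[L11, 0], [L21, L22]], its inverse is [[inv(L11), 0], [-inv(L22) L21 inv(L11), inv(L22)]], so the two diagonal blocks are inverted recursively and the off-diagonal block falls out of two matrix-matrix products. Below is a minimal CPU-side sketch of that recursion in Python/NumPy; it is an illustration of the general technique, not the paper's CUDA implementation, and the function name is an assumption.

import numpy as np

def invert_lower_triangular(L):
    """Invert a lower-triangular matrix by divide and conquer."""
    n = L.shape[0]
    if n == 1:
        return np.array([[1.0 / L[0, 0]]])  # base case: 1x1 block
    k = n // 2
    L11, L21, L22 = L[:k, :k], L[k:, :k], L[k:, k:]
    X11 = invert_lower_triangular(L11)      # recurse on diagonal blocks
    X22 = invert_lower_triangular(L22)
    X21 = -X22 @ L21 @ X11                  # off-diagonal block: two GEMMs
    X = np.zeros_like(L)
    X[:k, :k], X[k:, :k], X[k:, k:] = X11, X21, X22
    return X

# quick check against the general-purpose inverse
L = np.tril(np.random.rand(8, 8)) + 8 * np.eye(8)
assert np.allclose(invert_lower_triangular(L), np.linalg.inv(L))

Nearly all of the arithmetic in this recursion lands in the two multiplies that form X21, so a GPU implementation can delegate them to a tuned GEMM kernel, which is presumably why the scheme adapts well to GPU hardware.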