# high performance computing on graphics processing units: hgpu.org

## Orthogonalization on a general purpose graphics processing unit with double double and quad double arithmetic

Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago
arXiv:1210.0800 [cs.MS] (2 Oct 2012)

```
@article{2012arXiv1210.0800V,
  journal       = {ArXiv e-prints},
  archivePrefix = {arXiv},
  eprint        = {1210.0800},
  primaryClass  = {cs.MS},
  keywords      = {Mathematical Software; Distributed, Parallel, and Cluster Computing; Numerical Analysis},
  year          = {2012},
  month         = {oct}
}
```


Our problem is to accurately solve linear systems of modest dimensions (typically, the number of variables equals 32) on a general purpose graphics processing unit. The linear systems originate from the application of Newton’s method to polynomial systems of (moderately) large degrees. Newton’s method is applied as a corrector in a path following method, so the linear systems are solved in sequence, not simultaneously. One solution path may require the solution of thousands of linear systems. In previous work we reported good speedups with our implementation to evaluate and differentiate polynomial systems on the NVIDIA Tesla C2050. Although the cost of evaluation and differentiation often dominates the cost of linear system solving, the limited bandwidth of the communication between CPU and GPU means we cannot afford to send the linear systems to the CPU for solving. Because of the large degrees, the Jacobian matrix may contain extreme values, requiring extended precision, which leads to a significant overhead. This overhead of multiprecision arithmetic is an additional motivation to develop a massively parallel algorithm. To accommodate overdetermined linear systems we solve in the least squares sense, computing the QR decomposition of the matrix with the modified Gram-Schmidt algorithm. We describe our implementation of the modified Gram-Schmidt orthogonalization method for the NVIDIA Tesla C2050, using double double and quad double arithmetic. Our experimental results show that the achieved speedups are sufficiently high to compensate for the overhead of one extra level of precision.
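To illustrate the approach described in the abstract, the following is a minimal NumPy sketch of least squares solving via a QR decomposition computed by modified Gram-Schmidt. It is a plain double-precision CPU sketch for exposition only, not the paper's GPU implementation, and the function names are illustrative:

```python
import numpy as np

def mgs_qr(A):
    """QR decomposition via modified Gram-Schmidt.

    Columns are orthogonalized one at a time, and each remaining
    column is updated immediately after every projection; this
    immediate update is what distinguishes modified from classical
    Gram-Schmidt and gives better numerical stability.
    """
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = A.copy()
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(Q[:, k])
        Q[:, k] /= R[k, k]
        for j in range(k + 1, n):          # update the remaining columns now
            R[k, j] = Q[:, k] @ Q[:, j]
            Q[:, j] -= R[k, j] * Q[:, k]
    return Q, R

def lstsq_via_mgs(A, b):
    """Solve min ||A x - b|| via A = QR: solve the triangular
    system R x = Q^T b (valid also for overdetermined A)."""
    Q, R = mgs_qr(A)
    return np.linalg.solve(R, Q.T @ b)
```

For an overdetermined system the residual `b - A x` is generally nonzero; the computed `x` minimizes its 2-norm, which is the least squares sense in which the abstract's linear systems are solved.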
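The double double arithmetic mentioned in the abstract represents one value as an unevaluated sum of two hardware doubles, roughly doubling the number of significant digits. A minimal Python sketch of the core idea, built on Knuth's error-free two-sum; this is an assumed textbook formulation, not code from the paper:

```python
def two_sum(a, b):
    """Error-free transformation (Knuth): returns (s, e) with
    s = fl(a + b) and a + b = s + e exactly, for any doubles a, b."""
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e

def dd_add(x, y):
    """Add two double-double values, each a (hi, lo) pair with
    hi + lo the represented number; a sketch of the renormalized
    addition used in double-double libraries."""
    s, e = two_sum(x[0], y[0])   # exact sum of the high parts
    e += x[1] + y[1]             # accumulate the low parts
    s, e = two_sum(s, e)         # renormalize so |e| is tiny vs. s
    return (s, e)
```

The low part carries the rounding error that plain double arithmetic would discard, which is why extreme values in the Jacobian matrix can be handled without switching to arbitrary-precision software arithmetic.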

* * *