A Predictive Model for Solving Small Linear Algebra Problems in GPU Registers

Michael J. Anderson, David Sheffield, Kurt Keutzer
UC Berkeley: Department of Electrical Engineering and Computer Sciences, Berkeley, CA USA
International Parallel and Distributed Processing Symposium (IPDPS), 2012


@inproceedings{Anderson2012,
   title={A Predictive Model for Solving Small Linear Algebra Problems in GPU Registers},
   author={Anderson, M.J. and Sheffield, D. and Keutzer, K.},
   booktitle={International Parallel and Distributed Processing Symposium (IPDPS)},
   year={2012},
}
We examine the problem of solving many thousands of small dense linear algebra factorizations simultaneously on Graphics Processing Units (GPUs). We are interested in problems ranging from several hundred rows and columns down to 4×4 matrices. Problems of this size are common, especially in signal processing. However, they have received very little attention from current numerical linear algebra libraries for GPUs, which have thus far focused only on very large problems found in traditional supercomputing applications and benchmarks. To solve small problems efficiently we tailor our implementation to the GPU's inverted memory hierarchy and multi-level parallelism hierarchy. We provide a model of the GPU memory subsystem that can accurately predict and explain the performance of our approach across different problem sizes. As a motivating example, we look at space-time adaptive radar processing, a real-time application that requires hundreds of independent QR factorizations of small complex matrices (e.g. 240×66). For realistic matrix sizes from a standard radar processing benchmark, our implementation on an NVIDIA Quadro 6000 GPU runs 2.8x to 25x faster than Intel's Math Kernel Library (MKL) on an Intel Core i7-2600. For the QR factorizations of 5,000 56×56 single-precision matrices, our approach runs 29x faster than MKL and 140x faster than the state-of-the-art linear algebra library for GPUs. In each of these cases we are using the GPU's hardware-accelerated division and square root functions, which are accurate up to 22 mantissa bits.
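To make the workload concrete: each problem in the batch is an unblocked Householder QR of one small dense matrix, with the paper's contribution being to keep each matrix entirely in GPU registers and run thousands of such factorizations in parallel. The sketch below is a plain NumPy reference for the per-matrix computation only (it is not the authors' GPU implementation, and the function name and batching loop are illustrative assumptions):

```python
# Reference sketch of the per-matrix work in the batched small-QR problem.
# In the paper, each such factorization would be held in the registers of one
# GPU thread block; here we just show the unblocked Householder QR in NumPy.
import numpy as np

def householder_qr(A):
    """Unblocked Householder QR of a small m x n matrix (m >= n)."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for k in range(min(m - 1, n)):
        x = R[k:, k]
        # Reflect column k so that it becomes alpha * e1 below the diagonal.
        alpha = -np.copysign(np.linalg.norm(x), x[0])
        v = x.copy()
        v[0] -= alpha
        norm_v = np.linalg.norm(v)
        if norm_v == 0.0:
            continue  # column already triangular
        v /= norm_v
        # Apply the reflector H = I - 2 v v^T to the trailing matrix and to Q.
        R[k:, :] -= 2.0 * np.outer(v, v @ R[k:, :])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, R

# The batch is "embarrassingly parallel": a host-side loop like this is what
# the GPU replaces with thousands of concurrent register-resident instances.
rng = np.random.default_rng(0)
batch = rng.standard_normal((4, 56, 56))  # tiny stand-in for 5,000 matrices
factors = [householder_qr(A) for A in batch]
```

Note that the algorithm uses exactly the operations the abstract highlights: per-column norms (square roots) and normalizations (divisions), which is why the accuracy of the GPU's hardware-accelerated division and square root matters for this workload.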
