
Scaling LAPACK panel operations using parallel cache assignment

Anthony M. Castaldo, R. Clint Whaley
Department of Computer Science, University of Texas at San Antonio, San Antonio, TX 78249
Proceedings of the 15th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '10)

@conference{castaldo2010scaling,
   title        = {Scaling LAPACK panel operations using parallel cache assignment},
   author       = {Castaldo, Anthony M. and Whaley, R. Clint},
   booktitle    = {Proceedings of the 15th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '10)},
   pages        = {223--232},
   year         = {2010},
   organization = {ACM}
}


In LAPACK, many matrix operations are cast as block algorithms that iteratively process a panel using an unblocked algorithm and then update a remainder matrix using the high-performance Level 3 BLAS. The Level 3 BLAS have excellent weak scaling, but panel processing tends to be bus bound, and thus scales with bus speed rather than with the number of processors (p). Amdahl's law therefore ensures that as p grows, the panel computation will become the dominant cost of these LAPACK routines. Our contribution is a novel parallel cache assignment approach which we show scales well with p. We apply this general approach to the QR and LU panel factorizations on two commodity 8-core platforms with very different cache structures, and demonstrate superlinear panel factorization speedups on both machines. Other approaches to this problem demand complicated reformulations of the computational approach, new kernels to tune, new mathematics, and an inflation of the high-order flop count, and still do not perform as well. By demonstrating a straightforward alternative that avoids all of these contortions and scales with p, we address a critical stumbling block for dense linear algebra in the age of massive parallelism.
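
To make the panel/update structure concrete, below is a minimal C sketch of a right-looking blocked LU factorization of the kind the abstract describes: an unblocked routine factors each panel, and Level-3-style loop nests update the trailing matrix. This is not the paper's implementation; pivoting is omitted for brevity, the names panel_lu and blocked_lu and the block width NB are illustrative, and the two update loop nests are naive stand-ins for the tuned BLAS calls dtrsm and dgemm.

/*
 * Minimal sketch (not the paper's code): right-looking blocked LU
 * without pivoting, column-major storage, leading dimension lda.
 * panel_lu is the bus-bound unblocked step the paper targets; the two
 * loop nests in blocked_lu stand in for Level 3 BLAS (dtrsm, dgemm).
 */
enum { NB = 64 };  /* illustrative panel width; tuned per platform */

/* Unblocked LU (dgetf2-like, no pivoting) on an m x n panel. */
static void panel_lu(int m, int n, double *A, int lda)
{
    for (int j = 0; j < n && j < m; ++j) {
        for (int i = j + 1; i < m; ++i)          /* scale L column */
            A[i + j * lda] /= A[j + j * lda];
        for (int k = j + 1; k < n; ++k)          /* rank-1 trailing update */
            for (int i = j + 1; i < m; ++i)
                A[i + k * lda] -= A[i + j * lda] * A[j + k * lda];
    }
}

/* Blocked LU on an n x n matrix: panel factorization + Level 3 updates. */
static void blocked_lu(int n, double *A, int lda)
{
    for (int j = 0; j < n; j += NB) {
        int nb = (n - j < NB) ? n - j : NB;

        /* 1. Unblocked factorization of panel A[j:n, j:j+nb]. */
        panel_lu(n - j, nb, &A[j + j * lda], lda);

        /* 2. U12 = inv(L11) * A12: unit lower triangular solve (dtrsm-like). */
        for (int k = j + nb; k < n; ++k)
            for (int p = j; p < j + nb; ++p)
                for (int i = p + 1; i < j + nb; ++i)
                    A[i + k * lda] -= A[i + p * lda] * A[p + k * lda];

        /* 3. A22 -= L21 * U12: rank-nb update (dgemm-like). */
        for (int k = j + nb; k < n; ++k)
            for (int p = j; p < j + nb; ++p)
                for (int i = j + nb; i < n; ++i)
                    A[i + k * lda] -= A[i + p * lda] * A[p + k * lda];
    }
}

In LAPACK terms, steps 1-3 correspond to dgetf2, dtrsm, and dgemm inside dgetrf. Steps 2 and 3 are the Level 3 BLAS work that scales well with p; step 1 is the bus-bound portion that the paper's parallel cache assignment makes scale with p as well.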