Anatomy of High-Performance Many-Threaded Matrix Multiplication

Tyler M. Smith, Robert van de Geijn, Mikhail Smelyanskiy, Jeff R. Hammond, Field Van Zee
Institute for Computational Engineering and Sciences and Department of Computer Science, The University of Texas at Austin, Austin TX, 78712
28th IEEE International Parallel & Distributed Processing Symposium (IPDPS 2014), 2014


@inproceedings{smith2014anatomy,

   author={Tyler M. Smith and Robert A. van de Geijn and Mikhail Smelyanskiy and Jeff R. Hammond and Field G. {Van Zee}},

   title={Anatomy of High-Performance Many-Threaded Matrix Multiplication},

   booktitle={28th IEEE International Parallel \& Distributed Processing Symposium (IPDPS 2014)},

   year={2014}
}








BLIS is a new framework for rapid instantiation of the BLAS. We describe how BLIS extends the "GotoBLAS approach" to implementing matrix multiplication (GEMM). While GEMM was previously implemented as three loops around an inner kernel, BLIS exposes two additional loops within that inner kernel, casting the computation in terms of the BLIS microkernel so that porting GEMM becomes a matter of customizing this microkernel for a given architecture. We discuss how this exposes a finer level of parallelism that greatly simplifies the multithreading of GEMM, as well as additional opportunities for parallelizing multiple loops. Specifically, we show that with the advent of many-core architectures such as the IBM PowerPC A2 processor (used by Blue Gene/Q) and the Intel Xeon Phi processor, parallelizing both within and around the inner kernel, as the BLIS approach supports, is not only convenient but also necessary for scalability. The resulting implementations deliver what we believe to be the best open source performance for these architectures, achieving both impressive performance and excellent scalability.

* * *


HGPU group © 2010-2021 hgpu.org

All rights belong to the respective authors
