GiMMiK – Generating Bespoke Matrix Multiplication Kernels for Various Hardware Accelerators; Applications in High-Order Computational Fluid Dynamics

Bartosz D. Wozniak
Imperial College London, Department of Computing
Imperial College London, 2014

@mastersthesis{wozniak2014gimmik,
   title={GiMMiK -- Generating Bespoke Matrix Multiplication Kernels for Various Hardware Accelerators; Applications in High-Order Computational Fluid Dynamics},
   author={Wozniak, Bartosz D. and Kelly, Paul H. J. and Vincent, Peter E.},
   school={Imperial College London, Department of Computing},
   year={2014}
}

Matrix multiplication is a fundamental linear algebra routine ubiquitous in all areas of science and engineering. Highly optimised BLAS libraries (cuBLAS and clBLAS on GPUs) are the most popular choices for an implementation of the General Matrix Multiply (GEMM) in software. However, the performance of library GEMM is poor for small matrix sizes. In this thesis we consider a block-by-panel type of matrix multiplication, where the block matrix is typically small (e.g. dimensions of 96 × 64), motivated by an application in PyFR – the most recent implementation of Flux Reconstruction schemes for high-order fluid flow simulations on unstructured meshes. We show how prior knowledge of the operator matrix can be exploited to generate highly performant kernel code, which outperforms the cuBLAS and clBLAS GEMM implementations. We present GiMMiK – a generator of bespoke matrix multiplication kernels for the CUDA and OpenCL platforms. GiMMiK generates code by fully unrolling the matrix-vector product. The generated kernels embed values of the operator matrix directly in the code to benefit from the use of the constant cache and compiler optimisations. Further, we reduce the number of floating-point operations by removing multiplications by zeros. We are able to achieve speedups for individual PyFR matrices of up to 9.98 (12.20) times on the Tesla K40c and 63.30 (13.07) times on the GTX 780 Ti in double (single) precision. Using GiMMiK as the matrix multiplication kernel provider allows us to achieve a speedup of up to 1.72 (2.14) for an example simulation of an unsteady flow over a cylinder executed with PyFR in double (single) precision on the Tesla K40c.
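To illustrate the idea described in the abstract – fully unrolling the matrix-vector product, embedding the operator matrix entries as literals, and dropping multiplications by zero – here is a minimal, hypothetical Python sketch of such a kernel generator. It is not GiMMiK's actual implementation (the function name and the plain-C output format are assumptions for illustration); it only demonstrates the generation technique.

```python
# Hypothetical sketch of the bespoke-kernel idea: given a small operator
# matrix A known at generation time, emit fully unrolled source code for
# b = A @ x. Entries of A are embedded as literals, and terms with
# zero coefficients are eliminated, reducing floating-point work.

def generate_unrolled_kernel(A, name="bespoke_mm"):
    """Return C-like source computing b = A @ x, one unrolled
    statement per output row, with zero entries of A skipped."""
    rows, cols = len(A), len(A[0])
    lines = [f"void {name}(const double *x, double *b) {{"]
    for i in range(rows):
        # Keep only terms whose coefficient is non-zero.
        terms = [f"{A[i][j]!r}*x[{j}]" for j in range(cols) if A[i][j] != 0.0]
        rhs = " + ".join(terms) if terms else "0.0"
        lines.append(f"    b[{i}] = {rhs};")
    lines.append("}")
    return "\n".join(lines)

# Example: a sparse 3x3 operator matrix.
A = [[2.0, 0.0, 1.0],
     [0.0, 0.0, 0.0],
     [4.0, 3.0, 0.0]]
print(generate_unrolled_kernel(A))
```

For the example matrix above, the generated body contains `b[0] = 2.0*x[0] + 1.0*x[2];` – the zero coefficient in column 1 produces no term at all, and an all-zero row collapses to a constant assignment. In GiMMiK itself the same transformation is applied when emitting CUDA and OpenCL kernels, where the embedded literals can additionally benefit from the constant cache.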
