Fireiron: A Scheduling Language for High-Performance Linear Algebra on GPUs

Bastian Hagedorn, Archibald Samuel Elliott, Henrik Barthels, Rastislav Bodik, Vinod Grover
University of Münster
arXiv:2003.06324 [cs.PL] (13 Mar 2020)

@misc{hagedorn2020fireiron,
   title={Fireiron: A Scheduling Language for High-Performance Linear Algebra on GPUs},
   author={Bastian Hagedorn and Archibald Samuel Elliott and Henrik Barthels and Rastislav Bodik and Vinod Grover},
   year={2020},
   eprint={2003.06324},
   archivePrefix={arXiv},
   primaryClass={cs.PL}
}

Achieving high-performance GPU kernels requires optimizing algorithm implementations for the targeted GPU architecture. It is of utmost importance to fully exploit the compute and memory hierarchy, as well as the available specialised hardware. Currently, vendor libraries like cuBLAS and cuDNN provide the best-performing implementations of GPU algorithms. However, the task of the library programmer is incredibly challenging: for each provided algorithm, high-performance implementations have to be developed for all commonly used architectures, input sizes, and different storage formats. These implementations are generally provided as optimized assembly code because performance-critical architectural features are only exposed at this level. This prevents reuse between different implementations of even the same algorithm, as simple differences can have major effects on low-level implementation details. In this paper we introduce Fireiron, a DSL and compiler which allows the specification of high-performance GPU implementations as compositions of simple and reusable building blocks. We show how to use Fireiron to optimize matrix multiplication implementations, achieving performance matching hand-coded CUDA kernels, even when using specialised hardware such as NVIDIA Tensor Cores, and outperforming state-of-the-art implementations provided by cuBLAS by more than 2x.
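
To make the abstract's point about exploiting the memory hierarchy concrete, below is a generic shared-memory tiled matrix multiplication in plain CUDA. This is not Fireiron code and does not use Fireiron's API; it merely sketches the kind of decomposition (stage tiles of A and B in shared memory, accumulate per-thread partial products in registers) that a scheduling language like Fireiron expresses as reusable building blocks. The names TILE and sgemm_tiled and the row-major layout are illustrative assumptions, not taken from the paper.

#include <cuda_runtime.h>

#define TILE 16

// Computes C = A * B for N x N row-major matrices.
// Each thread block cooperatively stages TILE x TILE tiles of A and B
// in shared memory, so each global element is loaded only N/TILE times
// instead of N times.
__global__ void sgemm_tiled(const float* A, const float* B, float* C, int N) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    for (int t = 0; t < N; t += TILE) {
        // Load the next pair of tiles, guarding against out-of-range reads.
        As[threadIdx.y][threadIdx.x] =
            (row < N && t + threadIdx.x < N) ? A[row * N + t + threadIdx.x] : 0.0f;
        Bs[threadIdx.y][threadIdx.x] =
            (t + threadIdx.y < N && col < N) ? B[(t + threadIdx.y) * N + col] : 0.0f;
        __syncthreads();

        // Per-thread partial dot product over the staged tiles.
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }

    if (row < N && col < N)
        C[row * N + col] = acc;
}

A launch would use dim3 block(TILE, TILE) and a grid of ceil(N/TILE) by ceil(N/TILE) blocks. The abstract's claim is that decompositions like this tiling, including variants targeting Tensor Cores, can be specified as compositions of building blocks rather than rewritten by hand for every architecture, input size, and storage format.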