Optimizing CUDA Code By Kernel Fusion – Application on BLAS

Jiri Filipovic, Matus Madzin, Jan Fousek, Ludek Matyska
Institute of Computer Science, Masaryk University, Botanicka 68a, 602 00 Brno, Czech Republic
arXiv:1305.1183 [cs.DC], (6 May 2013)

@article{2013arXiv1305.1183F,
   author = {{Filipovi{\v c}}, J. and {Madzin}, M. and {Fousek}, J. and {Matyska}, L.},
   title = "{Optimizing CUDA Code By Kernel Fusion – Application on BLAS}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1305.1183},
   primaryClass = "cs.DC",
   keywords = {Computer Science - Distributed, Parallel, and Cluster Computing},
   year = {2013},
   month = {may},
   adsurl = {http://adsabs.harvard.edu/abs/2013arXiv1305.1183F},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

Modern GPUs are able to perform significantly more arithmetic operations than transfers of a single word to or from global memory. Hence, many GPU kernels are limited by memory bandwidth and cannot exploit the arithmetic power of GPUs. However, memory locality can often be improved by kernel fusion when a sequence of kernels is executed and some kernels in this sequence share data. In this paper, we show how kernels performing map, reduce or their nested combinations can be fused automatically by our source-to-source compiler. To demonstrate the usability of the compiler, we have implemented several BLAS-1 and BLAS-2 routines and show how the performance of their sequences can be improved by fusions. Compared to similar sequences using CUBLAS, our compiler is able to generate code that is up to 2.61x faster for the examples tested.
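To make the fusion idea concrete, the sketch below (not taken from the paper; the kernel names, block size, and host driver are illustrative assumptions) shows a BLAS-1 AXPY map followed by a sum reduction, first as two separate kernels and then as a single fused kernel that keeps the freshly computed element in a register and reduces it in shared memory, saving one round trip of the vector through global memory.

// Minimal sketch (assumed, not from the paper) of map + reduce kernel fusion.
#include <cstdio>
#include <cuda_runtime.h>

#define N (1 << 20)
#define BLOCK 256

// Unfused step 1: y = a*x + y (BLAS-1 AXPY), writes y to global memory.
__global__ void axpy(float a, const float *x, float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

// Unfused step 2: reads y back from global memory and reduces per block.
__global__ void block_sum(const float *y, float *partial, int n) {
    __shared__ float s[BLOCK];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    s[threadIdx.x] = (i < n) ? y[i] : 0.0f;
    __syncthreads();
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride) s[threadIdx.x] += s[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0) partial[blockIdx.x] = s[0];
}

// Fused kernel: computes the AXPY element in a register and feeds it
// straight into the reduction, so y crosses global memory only once.
__global__ void axpy_sum_fused(float a, const float *x, float *y,
                               float *partial, int n) {
    __shared__ float s[BLOCK];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float v = 0.0f;
    if (i < n) {
        v = a * x[i] + y[i];  // map step, result kept in a register
        y[i] = v;             // single store of y
    }
    s[threadIdx.x] = v;       // reduce step reuses the register value
    __syncthreads();
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride) s[threadIdx.x] += s[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0) partial[blockIdx.x] = s[0];
}

int main() {
    int blocks = (N + BLOCK - 1) / BLOCK;
    float *x, *y, *partial;
    cudaMallocManaged(&x, N * sizeof(float));
    cudaMallocManaged(&y, N * sizeof(float));
    cudaMallocManaged(&partial, blocks * sizeof(float));
    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // One fused launch replaces the axpy + block_sum pair above.
    axpy_sum_fused<<<blocks, BLOCK>>>(3.0f, x, y, partial, N);
    cudaDeviceSynchronize();

    double sum = 0.0;
    for (int b = 0; b < blocks; ++b) sum += partial[b];
    printf("sum(3*x + y) = %.0f (expected %.0f)\n", sum, 5.0 * N);

    cudaFree(x); cudaFree(y); cudaFree(partial);
    return 0;
}

The single fused launch replaces the axpy + block_sum pair and removes one full load and store of y; eliminating exactly this kind of intermediate global-memory traffic is how the paper's fusions speed up bandwidth-bound BLAS sequences.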