
Auto-tunable GPU BLAS

Jarle Erdal Steinsland
Department of Computer and Information Science, Norwegian University of Science and Technology, 2011

@article{steinsland2011auto,
   title={Auto-tunable GPU BLAS},
   author={Steinsland, Jarle Erdal},
   year={2011}
}

OpenCL is fast becoming the preferred framework for writing programs for heterogeneous platforms consisting of at least one CPU and one or more accelerators. Since GPUs are readily available in almost all computers, they are the most common accelerators in use. Good libraries are important to reduce development time and to make particular development environments, such as OpenCL, useful for the masses. Any OpenCL program can execute on any device that supports OpenCL; however, to achieve optimal performance, an OpenCL program must be optimized for a specific device. Auto-tuning is a strategy for automatically generating and selecting a well-performing program for a specific device, without requiring the user to perform optimizations manually.

BLAS contains routines that are useful for many algorithms suited to GPUs, and is therefore a good candidate for a library that can prove useful to many OpenCL programmers. In this thesis, we have chosen to implement the matrix multiplication routine from BLAS, since a fast matrix multiplication is important for the performance of many higher-level linear algebra algorithms. The exact operation we have implemented is $C = \alpha A B + \beta C$, where A, B and C are $M \times K$, $K \times N$ and $M \times N$ matrices, respectively.

We implement an auto-tuning framework that generates source code for OpenCL kernels and finds the best-performing one for the device it is executed on. We compare our version with ViennaCL, an OpenCL BLAS library, and with the vendor-provided BLAS libraries from AMD and NVIDIA. Our version generally achieves approximately 85% of the performance of NVIDIA's vendor library, and gives a speedup over AMD's native library, usually between 1.5x and 2x. On both platforms, our version outperforms ViennaCL by a large margin.
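For illustration only, the sketch below shows what a tunable OpenCL kernel for this operation might look like. It is not the code generated by the thesis framework; the kernel name, the row-major storage layout and the tile size TILE are assumptions, with TILE standing in for the kind of compile-time parameter (e.g. passed as -DTILE=16 when the source is built) that an auto-tuner could sweep while searching for the fastest variant on a given device.

// Illustrative OpenCL GEMM kernel (not the thesis implementation):
// computes C = alpha*A*B + beta*C for row-major A (MxK), B (KxN), C (MxN).
// TILE is a build-time parameter an auto-tuner could vary per device.
#ifndef TILE
#define TILE 16
#endif

__kernel void sgemm(const int M, const int N, const int K,
                    const float alpha, const float beta,
                    __global const float *A,
                    __global const float *B,
                    __global float *C)
{
    const int col  = get_global_id(0);   // column index of C
    const int row  = get_global_id(1);   // row index of C
    const int lcol = get_local_id(0);
    const int lrow = get_local_id(1);

    // Tiles of A and B staged in local (on-chip) memory.
    __local float Asub[TILE][TILE];
    __local float Bsub[TILE][TILE];

    float acc = 0.0f;
    const int numTiles = (K + TILE - 1) / TILE;

    for (int t = 0; t < numTiles; t++) {
        // Load one TILE x TILE block of A and B, zero-padding at the edges.
        const int aCol = t * TILE + lcol;
        const int bRow = t * TILE + lrow;
        Asub[lrow][lcol] = (row < M && aCol < K) ? A[row * K + aCol] : 0.0f;
        Bsub[lrow][lcol] = (bRow < K && col < N) ? B[bRow * N + col] : 0.0f;
        barrier(CLK_LOCAL_MEM_FENCE);

        // Multiply the staged tiles.
        for (int k = 0; k < TILE; k++)
            acc += Asub[lrow][k] * Bsub[k][lcol];
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    if (row < M && col < N)
        C[row * N + col] = alpha * acc + beta * C[row * N + col];
}

A host program would compile this source once per candidate TILE value, launch it with a TILE x TILE work-group size and a global size rounded up to a multiple of TILE, time each variant, and keep the fastest one for the device at hand.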