Auto-tuning Dense Vector and Matrix-Vector Operations for Fermi GPUs
Informatics and Mathematical Modelling, Technical University of Denmark, Bldg. 321, DK-2800 Lyngby, Denmark
Lecture Notes in Computer Science (LNCS) 7203, Springer, p. 619, 2012
@misc{sorensen2011auto,
  title={Auto-tuning Dense Vector and Matrix-Vector Operations for Fermi GPUs},
  author={S{\o}rensen, H. H. B.},
  year={2011}
}
In this paper, we consider the automatic performance tuning of dense vector and matrix-vector operations on GPUs. Such operations form the backbone of level 1 and level 2 routines in the Basic Linear Algebra Subprograms (BLAS) library and are therefore of great importance in many scientific applications. As examples, we develop single-precision CUDA kernels for the Euclidean norm (SNRM2) and the matrix-vector multiplication (SGEMV). The target hardware is the most recent Nvidia Tesla 20-series (Fermi architecture). We show that auto-tuning can be successfully applied to achieve high performance for dense vector and matrix-vector operations by appropriately utilizing the fine-grained parallelism of the GPU. Our tuned kernels deliver 25-100% better performance than the current CUBLAS v.3.2 library.
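To give a concrete sense of the fine-grained parallelism involved, below is a minimal CUDA sketch of a single-precision Euclidean norm (SNRM2) built from a grid-stride loop and a shared-memory tree reduction. This is not the paper's tuned kernel or the CUBLAS implementation; the kernel name and the block/thread counts are illustrative assumptions chosen here for the sketch.

```cuda
// snrm2_sketch.cu -- illustrative sketch only, not the paper's tuned kernel.
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <cuda_runtime.h>

// Each block reduces its grid-stride slice of x to one partial sum of
// squares; the host combines the partials and takes the square root.
__global__ void snrm2_partial(const float *x, int n, float *partial)
{
    extern __shared__ float sdata[];
    int tid = threadIdx.x;
    float sum = 0.0f;
    // Grid-stride loop: each thread accumulates squares of its elements.
    for (int i = blockIdx.x * blockDim.x + tid; i < n;
         i += gridDim.x * blockDim.x)
        sum += x[i] * x[i];
    sdata[tid] = sum;
    __syncthreads();
    // Shared-memory tree reduction (blockDim.x must be a power of two).
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) sdata[tid] += sdata[tid + s];
        __syncthreads();
    }
    if (tid == 0) partial[blockIdx.x] = sdata[0];
}

int main()
{
    const int n = 1 << 20;
    const int threads = 256, blocks = 64;  // example launch parameters
    float *hx = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) hx[i] = 1.0f;  // known answer: sqrt(n) = 1024

    float *dx, *dpartial;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dpartial, blocks * sizeof(float));
    cudaMemcpy(dx, hx, n * sizeof(float), cudaMemcpyHostToDevice);

    snrm2_partial<<<blocks, threads, threads * sizeof(float)>>>(dx, n, dpartial);

    float hpartial[blocks];
    cudaMemcpy(hpartial, dpartial, blocks * sizeof(float),
               cudaMemcpyDeviceToHost);
    double total = 0.0;
    for (int i = 0; i < blocks; ++i) total += hpartial[i];
    printf("snrm2 = %f\n", sqrt(total));

    cudaFree(dx); cudaFree(dpartial); free(hx);
    return 0;
}
```

Launch parameters like the threads-per-block and blocks-per-grid values hard-coded above are exactly the kind of knobs an auto-tuner would sweep to match a kernel to the Fermi memory hierarchy.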
April 18, 2012 by hgpu