KBLAS: An Optimized Library for Dense Matrix-Vector Multiplication on GPU Accelerators
Extreme Computing Research Center, KAUST
arXiv:1410.1726 [cs.MS] (7 Oct 2014)
@article{2014arXiv1410.1726A,
  author        = {{Abdelfattah}, A. and {Keyes}, D. and {Ltaief}, H.},
  title         = "{KBLAS: An Optimized Library for Dense Matrix-Vector Multiplication on GPU Accelerators}",
  journal       = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint        = {1410.1726},
  primaryClass  = "cs.MS",
  keywords      = {Computer Science - Mathematical Software},
  year          = 2014,
  month         = oct,
  adsurl        = {http://adsabs.harvard.edu/abs/2014arXiv1410.1726A},
  adsnote       = {Provided by the SAO/NASA Astrophysics Data System}
}
KBLAS is a new open-source, high-performance library that provides optimized kernels for a subset of Level 2 BLAS functionalities on CUDA-enabled GPUs. Since the performance of dense matrix-vector multiplication is hindered by the overhead of memory accesses, a double-buffering optimization technique is employed to overlap data motion with computation. After identifying a proper set of tuning parameters, KBLAS is able to run efficiently on GPU architectures across different generations, avoiding the time-consuming step of code rewriting while remaining compliant with the standard BLAS API. Another advanced optimization technique ensures coalesced memory access when dealing with submatrices, especially in the context of high-level dense linear algebra algorithms. KBLAS kernels in all four precisions have been extended to multi-GPU environments, which required the introduction of new APIs to ease the user experience on these challenging systems. KBLAS outperforms existing state-of-the-art implementations on all matrix sizes, achieving asymptotic speedups of up to 50% on single-GPU and 60% on multi-GPU systems, and validates our performance model. For wider dissemination, a subset of the high-performance KBLAS kernels has been integrated into NVIDIA's standard BLAS implementation (cuBLAS), starting with version 6.0.
October 8, 2014 by hgpu