Input-Aware Auto-Tuning of Compute-Bound HPC Kernels
Harvard University
arXiv:1802.05371 [cs.DC], (15 Feb 2018)
@article{tillet2018inputaware,
  title={Input-Aware Auto-Tuning of Compute-Bound HPC Kernels},
  author={Tillet, Philippe and Cox, David},
  year={2018},
  month=feb,
  eprint={1802.05371},
  archivePrefix={arXiv},
  primaryClass={cs.DC}
}
Efficient implementations of HPC applications for parallel architectures generally rely on external software packages (e.g., BLAS, LAPACK, cuDNN). While these libraries provide highly optimized routines for certain input characteristics (e.g., square matrices), they generally do not retain optimal performance across the wide range of problems encountered in practice. In this paper, we present an input-aware auto-tuning framework for matrix multiplications and convolutions, ISAAC, which uses predictive modeling techniques to drive highly parameterized PTX code templates towards not only hardware-, but also application-specific kernels. Numerical experiments on the NVIDIA Maxwell and Pascal architectures show up to 3x performance gains over both cuBLAS and cuDNN after only a few hours of auto-tuning.
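The tuning loop the abstract describes, benchmarking a parameterized kernel template across input shapes and using a learned model to choose a configuration for unseen shapes, can be sketched independently of any GPU code. The Python sketch below is an illustration only, not ISAAC's implementation: the measure_runtime cost function, the candidate tile sizes, and the nearest-neighbor predictor are hypothetical stand-ins (the paper drives real PTX templates with a trained predictive model), chosen here so the example runs anywhere.

"""Minimal sketch of input-aware auto-tuning (illustration only, not ISAAC).

For each training input shape, every candidate configuration of a
parameterized kernel is timed and the fastest is recorded. At run time,
an unseen shape reuses the configuration of the nearest previously tuned
shape, standing in for the predictive model described in the paper.
"""
import math
import random

# Candidate template parameters (hypothetical GEMM tile sizes).
CANDIDATES = [(mt, nt, kt) for mt in (32, 64, 128)
                           for nt in (32, 64, 128)
                           for kt in (8, 16)]

def measure_runtime(shape, config):
    """Stand-in for launching and timing a parameterized kernel.

    A real tuner would JIT the PTX/CUDA template with `config`, run it on
    the GPU for the given (M, N, K) shape, and return the measured time.
    Here a synthetic cost model plus noise keeps the example self-contained.
    """
    M, N, K = shape
    mt, nt, kt = config
    tiles = math.ceil(M / mt) * math.ceil(N / nt)
    work_per_tile = mt * nt * K
    imbalance = 1.0 + 0.1 * ((M % mt) + (N % nt)) / (mt + nt)
    return tiles * work_per_tile * imbalance * (1.0 + 0.05 * random.random())

def tune(shapes):
    """Exhaustively benchmark every candidate for each training shape."""
    return {shape: min(CANDIDATES, key=lambda c: measure_runtime(shape, c))
            for shape in shapes}

def predict(table, shape):
    """Pick the configuration of the nearest tuned shape (log-space distance)."""
    def dist(s):
        return sum((math.log(a) - math.log(b)) ** 2 for a, b in zip(s, shape))
    return table[min(table, key=dist)]

if __name__ == "__main__":
    training_shapes = [(256, 256, 256), (4096, 64, 1024), (64, 4096, 1024)]
    table = tune(training_shapes)
    # An unseen, skinny problem reuses the configuration tuned for the most
    # similar training shape instead of a one-size-fits-all default.
    print(predict(table, (3000, 80, 900)))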
February 17, 2018 by hgpu