Autotuning CUDA Compiler Parameters for Heterogeneous Applications using the OpenTuner Framework
Instituto de Matemática e Estatística (IME), Universidade de São Paulo (USP), R. do Matão, 1010 – Cidade Universitária, São Paulo – SP, 05508-090
Concurrency and Computation: Practice and Experience, 2017
@article{bruel2017autotuning,
title={Autotuning CUDA Compiler Parameters for Heterogeneous Applications using the OpenTuner Framework},
author={Bruel, Pedro and Amar{\'i}s, Marcos and Goldman, Alfredo},
journal={Concurrency and Computation: Practice and Experience},
year={2017}
}
A Graphics Processing Unit (GPU) is a parallel computing coprocessor specialized in accelerating vector operations. The enormous heterogeneity of parallel computing platforms justifies and motivates the development of automated optimization tools and techniques. The Algorithm Selection Problem consists of finding a combination of algorithms, or a configuration of an algorithm, that optimizes the solution of a set of problem instances. An autotuner solves the Algorithm Selection Problem using search and optimization techniques. In this paper we implement an autotuner for the CUDA compiler's parameters using the OpenTuner framework. The autotuner searches for a set of compilation parameters that minimizes the time to solve a problem. We analyze the performance speedups, in comparison with the compiler's high-level optimizations, achieved on three different GPU devices for 17 heterogeneous GPU applications, 12 of which are from the Rodinia Benchmark Suite. The autotuner often beat the compiler's high-level optimizations, but underperformed for some problems. We achieved over 2x speedup for Gaussian Elimination and almost 2x speedup for Heart Wall, both from the Rodinia Benchmark Suite, and over 4x speedup for a matrix multiplication algorithm.
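As a rough illustration of the approach, the sketch below shows how a tuner over nvcc flags could be set up with OpenTuner's Python API. It is a minimal sketch, not the authors' actual tuner: the source file name matmul.cu, the particular flags in the search space, and timing a single run of the binary are illustrative assumptions.

# Minimal sketch (assumed, not the paper's tuner): searching a few nvcc flags with OpenTuner.
import opentuner
from opentuner import ConfigurationManipulator
from opentuner import EnumParameter
from opentuner import MeasurementInterface
from opentuner import Result


class NvccFlagsTuner(MeasurementInterface):
    def manipulator(self):
        # Search space: each parameter is one choice of nvcc flag (illustrative subset).
        manipulator = ConfigurationManipulator()
        manipulator.add_parameter(EnumParameter('opt_level', ['-O0', '-O1', '-O2', '-O3']))
        manipulator.add_parameter(EnumParameter('fast_math', ['', '--use_fast_math']))
        manipulator.add_parameter(EnumParameter('ftz', ['--ftz=true', '--ftz=false']))
        return manipulator

    def run(self, desired_result, input, limit):
        # Compile with the candidate flags, then time one execution of the binary.
        cfg = desired_result.configuration.data
        compile_cmd = 'nvcc matmul.cu -o ./tmp_bin {0} {1} {2}'.format(
            cfg['opt_level'], cfg['fast_math'], cfg['ftz'])
        compile_result = self.call_program(compile_cmd)
        if compile_result['returncode'] != 0:
            return Result(state='ERROR', time=float('inf'))
        run_result = self.call_program('./tmp_bin')
        return Result(time=run_result['time'])


if __name__ == '__main__':
    argparser = opentuner.default_argparser()
    NvccFlagsTuner.main(argparser.parse_args())

Run as a script, OpenTuner's default search techniques repeatedly compile and time candidate flag configurations and report the best combination found within the given budget.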