Automatic Performance Optimization in ViennaCL for GPUs
CD Laboratory for Reliability, Institute for Microelectronics (IuE), TU Wien, Vienna, Austria
Proceedings of the 9th Workshop on Parallel/High-Performance Object-Oriented Scientific Computing (POOSC ’10), 2010
@inproceedings{rupp2010automatic,
  title        = {Automatic performance optimization in {ViennaCL} for {GPUs}},
  author       = {Rupp, K. and Weinbub, J. and Rudolf, F.},
  booktitle    = {Proceedings of the 9th Workshop on Parallel/High-Performance Object-Oriented Scientific Computing},
  pages        = {6},
  year         = {2010},
  organization = {ACM}
}
Highly parallel computing architectures such as graphics processing units (GPUs) pose several new challenges for scientific computing that are absent on single-core CPUs. In particular, the transition from existing serial code to parallel GPU code often requires considerable effort. The Vienna Computing Library (ViennaCL) presented in the first part of this work is based on OpenCL in order to support a wide range of hardware, and it provides a high-level C++ interface that is largely compatible with uBLAS, the CPU linear algebra library shipped with Boost. As a general-purpose linear algebra library, ViennaCL runs on a variety of GPU boards from different vendors with different hardware architectures. Consequently, the optimal number of threads working on a problem in parallel depends on both the available hardware and the algorithm executed on it. We present an optimization framework that extracts suitable thread numbers and allows ViennaCL to tune itself automatically to the underlying hardware. The performance enhancement of individually tuned kernels over default parameter choices ranges up to 25 percent for the kernels considered on high-end hardware, and up to a factor of seven on low-end hardware.
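The abstract does not spell out how the suitable thread numbers are extracted; a minimal sketch of the general idea, assuming a simple benchmarking loop over candidate work-group sizes, could look as follows. The callable launch_kernel is a hypothetical stand-in for enqueueing one of the library's OpenCL kernels with a given number of threads per work-group; it is not part of the ViennaCL API.

#include <chrono>
#include <cstddef>
#include <functional>
#include <limits>
#include <vector>

// Benchmark the kernel under test for several candidate work-group
// (thread) counts and return the fastest one. 'launch_kernel' is assumed
// to block until the kernel result is available.
std::size_t tune_local_threads(const std::function<void(std::size_t)> &launch_kernel)
{
  const std::vector<std::size_t> candidates = {16, 32, 64, 128, 256, 512};
  double best_time = std::numeric_limits<double>::max();
  std::size_t best_threads = candidates.front();

  for (std::size_t threads : candidates)
  {
    auto start = std::chrono::steady_clock::now();
    for (int repeat = 0; repeat < 10; ++repeat)   // average over several runs
      launch_kernel(threads);
    auto stop = std::chrono::steady_clock::now();

    double elapsed = std::chrono::duration<double>(stop - start).count();
    if (elapsed < best_time)
    {
      best_time    = elapsed;
      best_threads = threads;
    }
  }
  return best_threads;   // recorded value to be reused for this device and kernel
}

Tuning each kernel separately matters because, as the abstract notes, the best thread count depends on both the hardware and the algorithm executed on it, so a value that is optimal on one GPU or for one operation may be far from optimal on another.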