
Automatic Performance Optimization in ViennaCL for GPUs

Karl Rupp, Josef Weinbub, Florian Rudolf
CD Laboratory for Reliability, IuE, TU Wien, Vienna
Proceedings of the 9th Workshop on Parallel/High-Performance Object-Oriented Scientific Computing (POOSC ’10), 2010

@inproceedings{rupp2010automatic,
  title        = {Automatic performance optimization in {ViennaCL} for {GPUs}},
  author       = {Rupp, K. and Weinbub, J. and Rudolf, F.},
  booktitle    = {Proceedings of the 9th Workshop on Parallel/High-Performance Object-Oriented Scientific Computing (POOSC '10)},
  pages        = {6},
  year         = {2010},
  organization = {ACM}
}


Highly parallel computing architectures such as graphics processing units (GPUs) pose several new challenges for scientific computing that were absent on single-core CPUs. However, the transition from existing serial code to parallel code for GPUs often requires a considerable amount of effort. The Vienna Computing Library (ViennaCL) presented in the first part of this work is based on OpenCL in order to support a wide range of hardware, and it aims at providing a high-level C++ interface that is mostly compatible with the existing CPU linear algebra library uBLAS shipped with the Boost libraries. As a general-purpose linear algebra library, ViennaCL runs on GPU boards from different vendors with different hardware architectures. As a consequence, the optimal number of threads working on a problem in parallel depends on the available hardware and on the algorithm executed thereon. We present an optimization framework that extracts suitable thread numbers and thus allows ViennaCL to automatically tune itself to the underlying hardware. The performance enhancement of individually tuned kernels over default parameter choices ranges up to 25 percent for the kernels considered on high-end hardware, and up to a factor of seven on low-end hardware.
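The core idea of such a tuning framework can be sketched in a few dozen lines of host code: benchmark one kernel over several candidate work sizes on the installed device and keep the fastest configuration. The sketch below is a hypothetical illustration of this approach using the plain OpenCL 1.x C API, not ViennaCL's actual tuner; the kernel name `vec_add`, the candidate local sizes, and the fixed count of 128 work-groups are all assumptions made for the example.

    // Minimal autotuning sketch (hypothetical, not ViennaCL's framework):
    // time a vector-addition kernel for several work-group sizes and
    // report the fastest one for the present device.
    #include <CL/cl.h>
    #include <chrono>
    #include <cstdio>
    #include <vector>

    static const char *source =
        "__kernel void vec_add(__global const float *x,\n"
        "                      __global const float *y,\n"
        "                      __global float *z, unsigned int n) {\n"
        "  // grid-stride loop: correct for any global size\n"
        "  for (unsigned int i = get_global_id(0); i < n; i += get_global_size(0))\n"
        "    z[i] = x[i] + y[i];\n"
        "}\n";

    int main() {
      const unsigned int n = 1 << 20;
      std::vector<float> host(n, 1.0f);

      cl_platform_id platform;
      cl_device_id device;
      clGetPlatformIDs(1, &platform, NULL);
      clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
      cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
      cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

      cl_mem x = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                n * sizeof(float), host.data(), NULL);
      cl_mem y = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                n * sizeof(float), host.data(), NULL);
      cl_mem z = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float), NULL, NULL);

      cl_program program = clCreateProgramWithSource(ctx, 1, &source, NULL, NULL);
      clBuildProgram(program, 1, &device, NULL, NULL, NULL);
      cl_kernel kernel = clCreateKernel(program, "vec_add", NULL);
      clSetKernelArg(kernel, 0, sizeof(cl_mem), &x);
      clSetKernelArg(kernel, 1, sizeof(cl_mem), &y);
      clSetKernelArg(kernel, 2, sizeof(cl_mem), &z);
      clSetKernelArg(kernel, 3, sizeof(unsigned int), &n);

      // Candidate "thread numbers" (local work sizes) to benchmark.
      size_t local_sizes[] = {32, 64, 128, 256};
      size_t best_local = 0;
      double best_time = 1e30;
      for (size_t ls : local_sizes) {
        size_t global = 128 * ls;  // 128 work-groups: an assumed, fixed choice
        // warm-up launch, excluded from timing
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, &ls, 0, NULL, NULL);
        clFinish(queue);
        auto t0 = std::chrono::high_resolution_clock::now();
        for (int rep = 0; rep < 10; ++rep)
          clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, &ls, 0, NULL, NULL);
        clFinish(queue);
        double t = std::chrono::duration<double>(
                       std::chrono::high_resolution_clock::now() - t0).count();
        printf("local size %3zu: %.3f ms\n", ls, 1e3 * t / 10);
        if (t < best_time) { best_time = t; best_local = ls; }
      }
      printf("best local size for this device: %zu\n", best_local);
      // A full framework would persist this result per device and per kernel,
      // so that subsequent library runs pick up the tuned parameters automatically.
      return 0;
    }

The same measure-and-select loop generalizes to other tuning parameters (number of work-groups, unrolling factors), which is what makes the approach portable across vendors and hardware generations.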
