Optimizing and Auto-tuning Belief Propagation on the GPU
S. Grauer-Gray, J. Cavazos
Computer and Information Sciences, University of Delaware
Languages and Compilers for Parallel Computing, Lecture Notes in Computer Science, Volume 6548, pp. 121-135, 2011
@article{grauer2011optimizing,
title={Optimizing and Auto-tuning Belief Propagation on the {GPU}},
author={Grauer-Gray, S. and Cavazos, J.},
journal={Languages and Compilers for Parallel Computing},
pages={121--135},
year={2011},
publisher={Springer}
}
A CUDA kernel will use high-latency local memory for storage when there are not enough registers to hold the required data or when the data is an array that is accessed with a variable index inside a loop. Accesses to local memory take longer than accesses to registers and shared memory, so it is desirable to minimize its use. This paper analyzes strategies for reducing local-memory use in a CUDA implementation of belief propagation for stereo processing. We perform experiments using registers as well as shared memory as alternate locations for data initially placed in local memory, and then develop a hybrid implementation that allows the programmer to store an adjustable amount of data in shared, register, and local memory. We show results of running our optimized implementations on two different stereo sets and across three generations of NVIDIA GPUs, and introduce an auto-tuning implementation that generates an optimized belief propagation implementation for any input stereo set on any CUDA-capable GPU.
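To make the spill pattern described in the abstract concrete, below is a minimal CUDA sketch, not the paper's code. The kernel names, the COSTS constant (standing in for the number of disparity levels per pixel), and the min-cost selection used as the workload are illustrative assumptions. The first kernel holds a per-thread array that is later indexed with a runtime value, the access pattern that typically forces the compiler to place the array in high-latency local memory; the second stages the same data in shared memory instead.

#include <cuda_runtime.h>

#define COSTS 16  // assumed number of disparity levels per pixel (illustrative)

// Version 1: per-thread array indexed with a runtime value ('b' below).
// The variable index usually prevents register promotion, so the compiler
// places msg[] in high-latency local memory.
__global__ void bestCostLocal(const float* costs, float* best, int n)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n) return;

    float msg[COSTS];
    for (int d = 0; d < COSTS; d++)
        msg[d] = costs[tid * COSTS + d];

    // 'b' is computed at run time, so msg[b] is a variable-indexed access.
    int b = 0;
    for (int d = 1; d < COSTS; d++)
        if (msg[d] < msg[b]) b = d;
    best[tid] = msg[b];
}

// Version 2: the same data staged in shared memory, one slice per thread.
// This avoids local-memory traffic at the cost of shared-memory capacity
// (and therefore possibly occupancy). A strided/interleaved layout would
// reduce bank conflicts; the slice layout is kept simple here.
__global__ void bestCostShared(const float* costs, float* best, int n)
{
    extern __shared__ float smem[];          // blockDim.x * COSTS floats
    float* msg = &smem[threadIdx.x * COSTS]; // this thread's slice

    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n) return;

    for (int d = 0; d < COSTS; d++)
        msg[d] = costs[tid * COSTS + d];

    int b = 0;
    for (int d = 1; d < COSTS; d++)
        if (msg[d] < msg[b]) b = d;
    best[tid] = msg[b];
}

int main()
{
    const int n = 1 << 16;   // number of pixels (illustrative)
    const int threads = 128;
    const int blocks = (n + threads - 1) / threads;

    float *d_costs, *d_best;
    cudaMalloc(&d_costs, size_t(n) * COSTS * sizeof(float));
    cudaMalloc(&d_best, size_t(n) * sizeof(float));
    cudaMemset(d_costs, 0, size_t(n) * COSTS * sizeof(float));

    bestCostLocal<<<blocks, threads>>>(d_costs, d_best, n);

    // Dynamic shared memory: one COSTS-length slice per thread in the block.
    bestCostShared<<<blocks, threads, threads * COSTS * sizeof(float)>>>(d_costs, d_best, n);
    cudaDeviceSynchronize();

    cudaFree(d_costs);
    cudaFree(d_best);
    return 0;
}

Staging per-thread arrays in shared memory trades local-memory latency for shared-memory capacity, which can in turn lower occupancy; this is the kind of trade-off that the paper's hybrid storage scheme and auto-tuner are designed to navigate per GPU and per stereo set.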
October 30, 2011 by hgpu