Parallel algorithms to a parallel hardware: Designing vision algorithms for a GPU
Robotics Institute, Carnegie Mellon University
In 2009 IEEE International Conference on Computer Vision Workshops (September 2009), pp. 862-869
@conference{kim2010parallel,
title={Parallel algorithms to a parallel hardware: Designing vision algorithms for a GPU},
author={Kim, J.S. and Hwangbo, M. and Kanade, T.},
booktitle={2009 IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops)},
pages={862--869},
year={2009},
organization={IEEE}
}
GPUs have become an affordable way to accelerate slow processing on commodity systems. Most work applying them to non-rendering problems, however, consists of exact re-implementations of algorithms designed for a serial CPU. We examine what makes a good parallel algorithm and show that it is possible to design an algorithm targeted at parallel hardware even if it would be useless on a CPU. We investigate the optical flow estimation problem to demonstrate this possibility. In some time-critical applications, obtaining a result within a fixed time budget matters more than improving its accuracy. We therefore design optical flow approximation algorithms tailored to a GPU that produce a reasonable result as quickly as possible, by reformulating the problem as change detection with hypothesis generation using features tracked in advance. Two parallel algorithms are proposed: direct interpolation and testing multiple hypotheses. We discuss implementation issues in the CUDA framework. Both methods run on a GPU at near video rate and provide results adequate for time-critical applications. These GPU-tailored algorithms are practical because they run about 240 times faster than equivalent serial implementations, which are too slow to be useful in practice.
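To make the "testing multiple hypotheses" idea concrete, here is a minimal, hypothetical sketch of the per-pixel work such a scheme implies: each pixel independently evaluates a small set of candidate flows (e.g. derived from features tracked in advance) and keeps the one with the lowest patch-matching cost. The function names, SSD cost, and patch radius are illustrative assumptions, not the paper's actual implementation; the point is that the loop body is embarrassingly parallel, so on a GPU one CUDA thread would run it per pixel, while here it is written as plain C for clarity.

```c
#include <float.h>
#include <stddef.h>

/* Candidate displacement (hypothetical representation). */
typedef struct { int dx, dy; } Flow;

/* Sum of squared differences between a patch around (x, y) in frame a
   and the patch displaced by (dx, dy) in frame b; r is the patch radius.
   Out-of-bounds patches are rejected with an infinite cost. */
static float patch_ssd(const unsigned char *a, const unsigned char *b,
                       int w, int h, int x, int y, int dx, int dy, int r)
{
    float ssd = 0.0f;
    for (int v = -r; v <= r; ++v)
        for (int u = -r; u <= r; ++u) {
            int xa = x + u, ya = y + v;
            int xb = xa + dx, yb = ya + dy;
            if (xa < 0 || xa >= w || ya < 0 || ya >= h ||
                xb < 0 || xb >= w || yb < 0 || yb >= h)
                return FLT_MAX;
            float d = (float)a[ya * w + xa] - (float)b[yb * w + xb];
            ssd += d * d;
        }
    return ssd;
}

/* Per-pixel hypothesis test: pick the candidate flow with the lowest
   SSD cost. On a GPU this body would be one thread's work per pixel. */
Flow best_hypothesis(const unsigned char *f0, const unsigned char *f1,
                     int w, int h, int x, int y,
                     const Flow *hyp, size_t nhyp, int r)
{
    Flow best = hyp[0];
    float best_cost = FLT_MAX;
    for (size_t i = 0; i < nhyp; ++i) {
        float c = patch_ssd(f0, f1, w, h, x, y, hyp[i].dx, hyp[i].dy, r);
        if (c < best_cost) { best_cost = c; best = hyp[i]; }
    }
    return best;
}
```

Because each pixel's decision touches only read-only frame data and a small shared hypothesis list, the GPU version needs no synchronization between threads, which is one property that makes an algorithm like this a good fit for parallel hardware even if a CPU would prefer a smarter serial method.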
January 8, 2011 by hgpu