
Realtime affine-photometric KLT feature tracker on GPU in CUDA framework

Jun-Sik Kim, Myung Hwangbo, Takeo Kanade
Robotics Institute, Carnegie Mellon University
IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), 2009

@conference{kim2009realtime,
   title={Realtime affine-photometric KLT feature tracker on GPU in CUDA framework},
   author={Kim, J.S. and Hwangbo, M. and Kanade, T.},
   booktitle={Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on},
   pages={886--893},
   year={2009},
   organization={IEEE}
}


Feature tracking is one of the fundamental steps in many computer vision algorithms, and the KLT (Kanade-Lucas-Tomasi) method has been successfully used for optical flow estimation. There has also been much effort to implement KLT on GPUs to increase speed and track more features. Many implementations have chosen the translation model to describe template motion because of its simplicity. However, a more complex model is needed to handle appearance change, especially in outdoor scenes or when the camera undergoes roll motions. We implement the KLT tracker using an affine-photometric model on GPUs, which has not been in popular use due to its computational complexity. With careful attention to the parallel computing architecture of GPUs, up to 1024 feature points can be tracked simultaneously at video rate under various 3D camera motions. Practical implementation issues are discussed in the NVIDIA CUDA framework. We design different thread types and memory access patterns according to the different computation requirements at each step of the KLT. We also suggest a CPU-GPU hybrid structure to overcome GPU limitations.
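
The sketch below illustrates, in a minimal form, how an affine-photometric residual evaluation could be mapped to CUDA threads in the spirit the abstract describes: one thread block per feature and one thread per template pixel. The kernel, the FeatureState layout, the 11x11 patch size, and the gain/bias convention are assumptions made for illustration, not the paper's actual implementation.

#include <cuda_runtime.h>

#define PATCH 11                       // assumed template patch side length
#define NPIX (PATCH * PATCH)

struct FeatureState {                  // hypothetical per-feature parameters
    float a11, a12, a21, a22;          // affine deformation
    float tx, ty;                      // translation
    float alpha, beta;                 // photometric gain and bias
};

__device__ float bilinear(const float* img, int w, int h, float x, float y)
{
    int x0 = (int)floorf(x), y0 = (int)floorf(y);
    if (x0 < 0 || y0 < 0 || x0 + 1 >= w || y0 + 1 >= h) return 0.0f;
    float fx = x - x0, fy = y - y0;
    const float* p = img + y0 * w + x0;
    return (1.0f - fy) * ((1.0f - fx) * p[0] + fx * p[1]) +
           fy          * ((1.0f - fx) * p[w] + fx * p[w + 1]);
}

// One block per feature, one thread per patch pixel: each thread warps its
// patch coordinate into the current image and computes the photometrically
// compensated residual against the stored template.
__global__ void residualKernel(const float* img, int w, int h,
                               const float* templ,          // NPIX template pixels per feature
                               const FeatureState* states,
                               float* residuals)            // NPIX residuals per feature
{
    int f = blockIdx.x;                // feature index
    int i = threadIdx.x;               // pixel index within the patch
    if (i >= NPIX) return;

    FeatureState s = states[f];
    float u = (float)(i % PATCH) - PATCH / 2;   // patch coordinates centered at 0
    float v = (float)(i / PATCH) - PATCH / 2;

    // Affine warp of the patch coordinate into the current image.
    float x = s.a11 * u + s.a12 * v + s.tx;
    float y = s.a21 * u + s.a22 * v + s.ty;

    // Photometric model: gain/bias applied to the template (one common convention).
    float t = (1.0f + s.alpha) * templ[f * NPIX + i] + s.beta;
    residuals[f * NPIX + i] = bilinear(img, w, h, x, y) - t;
}

A host launch such as residualKernel<<<numFeatures, NPIX>>>(...) would evaluate all features in parallel; the per-pixel residuals would then feed the Gauss-Newton update for the eight warp/photometric parameters, which the paper assigns to a different kernel configuration.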