
Articulated object tracking by rendering consistent appearance parts

Zachary Pezzementi, Sandrine Voros, Gregory D. Hager
Laboratory for Computational Science and Robotics (LCSR), Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218, USA
IEEE International Conference on Robotics and Automation, 2009. ICRA ’09

@inproceedings{pezzementi2009articulated,
  title={Articulated object tracking by rendering consistent appearance parts},
  author={Pezzementi, Z. and Voros, S. and Hager, G.D.},
  booktitle={Robotics and Automation, 2009. ICRA'09. IEEE International Conference on},
  pages={3940--3947},
  year={2009},
  organization={IEEE}
}


We describe a general methodology for tracking 3-dimensional objects in monocular and stereo video that makes use of GPU-accelerated filtering and rendering in combination with machine learning techniques. The method operates on targets consisting of kinematic chains with known geometry. The tracked target is divided into one or more areas of consistent appearance. The appearance of each area is represented by a classifier trained to assign a class-conditional probability to image feature vectors. A search is then performed on the configuration space of the target to find the maximum likelihood configuration. In the search, candidate hypotheses are evaluated by rendering a 3D model of the target object and measuring its consistency with the class probability map. The method is demonstrated for tool tracking on videos from two surgical domains, as well as in a human hand-tracking task.
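The abstract's core loop can be illustrated with a minimal sketch (not the authors' code): each candidate configuration is "rendered" to a silhouette mask, which is scored against the per-pixel class-conditional probability map produced by the trained classifier, and the maximum-likelihood candidate wins. The toy disc renderer and all names here are illustrative stand-ins for the paper's GPU-rendered kinematic-chain model.

```python
# Hedged sketch of hypothesis evaluation against a class-probability map.
# A real implementation would render the full articulated 3D model on the
# GPU; here a disc "part" stands in for the rendered silhouette.
import numpy as np

def log_likelihood(prob_map, mask, eps=1e-6):
    """Log-likelihood of the image given a rendered target mask.

    prob_map[y, x] is the classifier's probability that pixel (y, x)
    belongs to the target's appearance class; mask is the silhouette
    rendered for one candidate configuration.
    """
    p = np.clip(prob_map, eps, 1.0 - eps)
    return np.where(mask, np.log(p), np.log(1.0 - p)).sum()

def render_mask(center, radius, shape):
    """Toy stand-in for rendering: a disc 'part' at `center`."""
    yy, xx = np.mgrid[: shape[0], : shape[1]]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

# Synthetic probability map: the target really is a disc at (12, 12).
shape = (32, 32)
true_mask = render_mask((12, 12), 5, shape)
prob_map = np.where(true_mask, 0.9, 0.1)

# Coarse search over the "configuration space" (here, just disc centers):
candidates = [(8, 8), (12, 12), (20, 20)]
scores = [log_likelihood(prob_map, render_mask(c, 5, shape)) for c in candidates]
best = candidates[int(np.argmax(scores))]
print(best)  # the maximum-likelihood candidate
```

In the paper the search runs over the joint angles and pose of a kinematic chain with known geometry, and both the filtering and the rendering of hypotheses are GPU-accelerated; the scoring idea, however, is the same consistency measure between a rendered model and the class probability map.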

* * *


HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
