Cue-independent extending inverse kinematics for robust pose estimation in 3D point clouds
Institute for Human-Machine Communication, Technische Universität München, Arcisstr. 21, 80634 München, Germany
17th IEEE International Conference on Image Processing (ICIP), 2010
@inproceedings{lehment2010cue,
title={Cue-independent extending inverse kinematics for robust pose estimation in 3D point clouds},
author={Lehment, N.H. and Kaiser, M. and Arsic, D. and Rigoll, G.},
booktitle={Image Processing (ICIP), 2010 17th IEEE International Conference on},
pages={2465--2468},
year={2010},
organization={IEEE}
}
While monocular gesture recognition is slowly reaching maturity, the inclusion of 3D gestures remains a challenge. To enable robust and versatile depth-enabled gestures, a depth-image based tracking approach is developed. Using a model-based annealing particle filter, the pose of a single subject is retrieved and tracked over longer image and motion sequences. Unlike many previous depth-image based systems, full-body tracking is performed. The system is independent of specific camera types and does not rely on color or texture cues. Pose space exploration in complex kinematic chains is enhanced by considering extending inverse kinematics. Exploiting the highly parallel nature of the 3D point based approach, the algorithm is partially implemented on a GPU, leading to near real-time performance.
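The following is a minimal, self-contained Python/NumPy sketch of the annealed particle filter idea the abstract refers to: particle weights are sharpened over several annealing layers, and the particle set is resampled and diffused with shrinking noise so the estimate concentrates around the strongest pose hypothesis. The likelihood, the annealing schedule, and all function names and parameters are illustrative assumptions, not the authors' implementation; a real system would score each pose hypothesis by comparing the rendered articulated body model against the observed 3D point cloud.

# Minimal sketch of an annealed particle filter for model-based pose
# estimation. All names and parameters below are illustrative assumptions,
# not the paper's implementation.

import numpy as np

def point_cloud_likelihood(pose, cloud):
    # Placeholder likelihood: compares the translational part of the pose
    # against the cloud centroid. A real system would render the body model
    # and measure point-to-surface distances to the observed point cloud.
    d = np.linalg.norm(pose[:3] - cloud.mean(axis=0))
    return np.exp(-d ** 2)

def annealed_particle_filter(particles, cloud, layers=5, beta0=0.1, sigma=0.05):
    # One annealing run: weights are sharpened layer by layer so that
    # particles progressively concentrate around the strongest mode.
    rng = np.random.default_rng(0)
    n = len(particles)
    for layer in range(layers):
        beta = beta0 * (2.0 ** layer)          # assumed annealing schedule
        w = np.array([point_cloud_likelihood(p, cloud) ** beta for p in particles])
        w /= w.sum()
        idx = rng.choice(n, size=n, p=w)       # resample proportional to weights
        noise = rng.normal(0.0, sigma / (layer + 1), particles.shape)
        particles = particles[idx] + noise     # diffuse with shrinking noise
    return particles

# Usage: 200 particles over a toy 6-DoF pose, fitted to a synthetic cloud.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cloud = rng.normal([0.5, 0.2, 1.0], 0.02, size=(1000, 3))
    particles = rng.uniform(-1, 1, size=(200, 6))
    est = annealed_particle_filter(particles, cloud).mean(axis=0)
    print("estimated position:", est[:3])

The per-particle weighting step is the natural target for the GPU port mentioned in the abstract, since each pose hypothesis can be scored against the point cloud independently.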
July 20, 2011 by hgpu