
PERCH 2.0: Fast and Accurate GPU-based Perception via Search for Object Pose Estimation

Aditya Agarwal, Yupeng Han, Maxim Likhachev
The Robotics Institute, Carnegie Mellon University, PA, USA
arXiv:2008.00326 [cs.CV], 1 Aug 2020

@misc{agarwal2020perch,
   title={PERCH 2.0: Fast and Accurate GPU-based Perception via Search for Object Pose Estimation},
   author={Aditya Agarwal and Yupeng Han and Maxim Likhachev},
   year={2020},
   eprint={2008.00326},
   archivePrefix={arXiv},
   primaryClass={cs.CV}
}

Pose estimation of known objects is fundamental to tasks such as robotic grasping and manipulation. The need for reliable grasping imposes stringent accuracy requirements on pose estimation in cluttered, occluded scenes in dynamic environments. Modern methods employ large sets of training data to learn features in order to find correspondences between 3D models and observed data. However, these methods require extensive annotation of ground truth poses. An alternative is to use algorithms that search for the best explanation of the observed scene in a space of possible rendered scenes. A recently developed algorithm, PERCH (PErception Via SeaRCH), does so by using depth data to converge to a globally optimal solution via a search over a specially constructed tree. While PERCH offers strong guarantees on accuracy, the current formulation suffers from low scalability owing to its high runtime. In addition, the sole reliance on depth data for pose estimation restricts the algorithm to scenes where no two objects have the same shape. In this work, we propose PERCH 2.0, a novel perception-via-search strategy that takes advantage of GPU acceleration and RGB data. We show that our approach achieves a speedup of 100x over PERCH, as well as better accuracy than state-of-the-art data-driven approaches on 6-DoF pose estimation, without the need for annotating ground truth poses in the training data. Our code and video are available.
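The core "perception via search" idea the abstract describes — render candidate scenes, compare each against the observed depth image, and keep the best-scoring pose — can be illustrated with a deliberately tiny sketch. This is not the PERCH 2.0 implementation: the toy `render_depth` function, the 2-DoF pose grid, and the pixel-mismatch cost are all simplifications invented here for illustration; the actual system renders 3D models, searches 6-DoF poses, and evaluates candidates in parallel on the GPU.

```python
import numpy as np

def render_depth(pose, size=32):
    """Toy 'renderer' (hypothetical): an 8x8 square object whose image
    position depends on a 2-DoF pose. Stands in for rendering a 3D model."""
    x, y = pose
    img = np.full((size, size), np.inf)  # background: no depth return
    img[y:y + 8, x:x + 8] = 1.0          # object surface at depth 1.0
    return img

def cost(rendered, observed):
    """Count pixels where the rendered scene disagrees with the observation.
    PERCH uses a principled explanation cost over depth (and, in 2.0, RGB)."""
    return int(np.sum(rendered != observed))

# Observed scene: object at the (unknown to the search) true pose (10, 5).
observed = render_depth((10, 5))

# Exhaustive search over a discretized pose space. PERCH organizes this as a
# tree search with optimality guarantees and evaluates renders on the GPU;
# here we simply scan all candidates serially.
candidates = [(x, y) for x in range(24) for y in range(24)]
best = min(candidates, key=lambda p: cost(render_depth(p), observed))
print(best)  # → (10, 5)
```

The sketch recovers the true pose because the cost reaches zero only when the rendered scene exactly explains the observation; the scalability problem the abstract mentions is visible even here, since the candidate set grows exponentially with pose dimensionality and object count, which is what motivates the GPU parallelization in PERCH 2.0.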

* * *


HGPU group © 2010-2020 hgpu.org

All rights belong to the respective authors
