Semantic Pose using Deep Networks Trained on Synthetic RGB-D
Bernstein Center for Computational Neuroscience (BCCN), III. Physikalisches Institut – Biophysik, Georg-August University of Göttingen
arXiv:1508.00835 [cs.CV], 4 Aug 2015
@article{papon2015semantic,
  title={Semantic Pose using Deep Networks Trained on Synthetic RGB-D},
  author={Papon, Jeremie and Schoeler, Markus},
  year={2015},
  month={aug},
  eprint={1508.00835},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
In this work we address the problem of indoor scene understanding from RGB-D images. Specifically, we propose to find instances of common furniture classes, their spatial extent, and their pose with respect to generalized class models. To accomplish this, we use a deep, wide, multi-output convolutional neural network (CNN) that predicts class, pose, and location of possible objects simultaneously. To overcome the lack of large annotated RGB-D training sets (especially those with pose), we use an on-the-fly rendering pipeline that generates realistic cluttered room scenes in parallel with training. We then perform transfer learning on the relatively small amount of publicly available annotated RGB-D data, and find that our model successfully annotates even highly challenging real scenes. Importantly, our trained network can interpret noisy and sparse observations of highly cluttered scenes with a remarkable degree of accuracy, inferring class and pose from a very limited set of cues. Additionally, because our network is only moderately deep and computes class, pose, and position in a single pass, its overall run-time is significantly lower than that of existing methods: all output parameters are estimated simultaneously on a GPU in seconds.
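As a rough illustration of the multi-output idea, the sketch below builds a single shared trunk with three prediction heads, so one forward pass yields class, pose, and location together. It uses PyTorch purely for illustration, and every layer size, the class count, and the pose discretization are invented for the example; none of these details come from the paper.

# Hypothetical sketch of a multi-output CNN in the spirit of the paper:
# one shared trunk, three heads predicting class, pose, and location.
# Layer sizes and head definitions are illustrative, not the authors' architecture.
import torch
import torch.nn as nn

class SemanticPoseNet(nn.Module):
    def __init__(self, num_classes: int, num_pose_bins: int):
        super().__init__()
        # Shared convolutional trunk over a 4-channel RGB-D input.
        self.trunk = nn.Sequential(
            nn.Conv2d(4, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Separate heads share features but are trained on different targets.
        self.class_head = nn.Linear(256, num_classes)   # object class
        self.pose_head = nn.Linear(256, num_pose_bins)  # discretized pose
        self.loc_head = nn.Linear(256, 3)               # 3D position

    def forward(self, rgbd: torch.Tensor):
        feats = self.trunk(rgbd)
        return self.class_head(feats), self.pose_head(feats), self.loc_head(feats)

net = SemanticPoseNet(num_classes=10, num_pose_bins=16)
cls, pose, loc = net(torch.randn(1, 4, 128, 128))  # one pass, all outputs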
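The "rendering in parallel to training" pipeline can be approximated as an endless data stream whose items are synthesized by CPU worker processes while the GPU trains, so no fixed dataset ever sits on disk. In this sketch, render_random_scene is a hypothetical stand-in for the paper's scene renderer and only returns random tensors of plausible shapes.

# Minimal sketch of on-the-fly synthetic data generation: worker processes
# synthesize labeled RGB-D scenes continuously during training.
import torch
from torch.utils.data import IterableDataset, DataLoader

def render_random_scene():
    """Hypothetical renderer stub: (RGB-D image, class, pose bin, location)."""
    rgbd = torch.randn(4, 128, 128)        # stand-in for a rendered cluttered room
    cls = torch.randint(0, 10, ()).item()
    pose = torch.randint(0, 16, ()).item()
    loc = torch.randn(3)
    return rgbd, cls, pose, loc

class SyntheticScenes(IterableDataset):
    def __iter__(self):
        while True:                        # endless stream of freshly rendered scenes
            yield render_random_scene()

# num_workers > 0 keeps CPU-side rendering running ahead of GPU training.
loader = DataLoader(SyntheticScenes(), batch_size=32, num_workers=4)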
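Finally, a hedged sketch of the transfer-learning step, continuing from the two sketches above (it reuses net and loader): the synthetic-trained trunk is frozen, and the heads are adapted on the small annotated real set. Freezing the trunk, the particular loss combination, and all hyperparameters are assumptions for illustration, not the paper's recipe.

# Illustrative fine-tuning on real annotated RGB-D data (assumed procedure).
import torch

for p in net.trunk.parameters():
    p.requires_grad = False                # freeze synthetic-trained features

optimizer = torch.optim.SGD(
    [{"params": net.class_head.parameters()},
     {"params": net.pose_head.parameters()},
     {"params": net.loc_head.parameters()}],
    lr=1e-3, momentum=0.9,
)

ce = torch.nn.CrossEntropyLoss()
mse = torch.nn.MSELoss()
for rgbd, cls_t, pose_t, loc_t in loader:  # real annotated batches in practice
    cls_p, pose_p, loc_p = net(rgbd)
    # Joint loss over all three outputs, trained in a single pass.
    loss = ce(cls_p, cls_t) + ce(pose_p, pose_t) + mse(loc_p, loc_t)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    break                                  # single illustrative step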
August 5, 2015 by hgpu