
Convolutional Neural Network-Based Image Representation for Visual Loop Closure Detection

Yi Hou, Hong Zhang, Shilin Zhou
College of Electronic Science and Engineering, National University of Defense Technology, Changsha, Hunan, P. R. China
arXiv:1504.05241 [cs.RO] (20 Apr 2015)

@article{hou2015convolutional,
   title={Convolutional Neural Network-Based Image Representation for Visual Loop Closure Detection},
   author={Hou, Yi and Zhang, Hong and Zhou, Shilin},
   journal={arXiv preprint arXiv:1504.05241},
   year={2015},
   month={apr},
   eprint={1504.05241},
   archivePrefix={arXiv},
   primaryClass={cs.RO}
}


Deep convolutional neural networks (CNN) have recently been shown in many computer vision and pattern recognition applications to outperform state-of-the-art solutions that use traditional hand-crafted features by a significant margin. However, this impressive performance is yet to be fully exploited in robotics. In this paper, we focus on one specific problem that can benefit from the recent development of the CNN technology, i.e., we focus on using a pre-trained CNN model as a method of generating an image representation appropriate for visual loop closure detection in SLAM (simultaneous localization and mapping). We perform a comprehensive evaluation of the outputs at the intermediate layers of a CNN as image descriptors, in comparison with state-of-the-art image descriptors, in terms of their ability to match images for detecting loop closures. The main conclusions of our study include: (a) CNN-based image representations perform comparably to state-of-the-art hand-crafted competitors in environments without significant lighting change, (b) they outperform state-of-the-art competitors when lighting changes significantly, and (c) they are also significantly faster to extract than the state-of-the-art hand-crafted features even on a conventional CPU and are two orders of magnitude faster on an entry-level GPU.
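The idea described in the abstract can be illustrated with a minimal sketch (not the authors' exact pipeline): take a CNN pre-trained on a generic recognition task, forward an image through its early layers, and use the flattened intermediate activations as a whole-image descriptor that is matched against earlier frames by cosine similarity. The model choice (torchvision AlexNet), the particular layer index, and the preprocessing below are assumptions made only for illustration.

# Sketch: intermediate CNN activations as image descriptors for loop closure.
# Model, layer index, and preprocessing are illustrative assumptions, not the
# paper's exact configuration.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a pre-trained CNN and switch to inference mode.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.eval()

# Standard ImageNet preprocessing (assumed to match the network's training).
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def describe(image_path: str, layer_index: int = 8) -> torch.Tensor:
    """Return the flattened activations of an intermediate conv layer
    (forward pass stops after model.features[layer_index]) as the descriptor."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feat = model.features[: layer_index + 1](img)  # early layers only
    desc = feat.flatten()
    return desc / desc.norm()  # L2-normalize so dot product = cosine similarity

def similarity(desc_a: torch.Tensor, desc_b: torch.Tensor) -> float:
    """Cosine similarity between two descriptors; a high score between the
    current frame and a previously visited frame flags a loop closure candidate."""
    return float(torch.dot(desc_a, desc_b))

In a loop closure detector, the descriptor of each incoming frame would be compared against the descriptors of stored keyframes, and matches above a threshold would be passed on for geometric verification; running the forward pass on a GPU is what yields the large extraction speed-up reported in the paper.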
