
A Framework for the Volumetric Integration of Depth Images

Victor Adrian Prisacariu, Olaf Kähler, Ming-Ming Cheng, Julien Valentin, Philip H.S. Torr, Ian D. Reid, David W. Murray
University of Oxford
arXiv:1410.0925 [cs.CV] (3 Oct 2014)

@article{2014arXiv1410.0925P,
   author = {{Prisacariu}, V.~A. and {K{\"a}hler}, O. and {Cheng}, M.~M. and {Valentin}, J. and {Torr}, P.~H.~S. and {Reid}, I.~D. and {Murray}, D.~W.},
   title = "{A Framework for the Volumetric Integration of Depth Images}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1410.0925},
   primaryClass = "cs.CV",
   keywords = {Computer Science - Computer Vision and Pattern Recognition},
   year = 2014,
   month = oct,
   adsurl = {http://adsabs.harvard.edu/abs/2014arXiv1410.0925P},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


Volumetric models have become a popular representation for 3D scenes in recent years. One of the breakthroughs leading to their popularity was KinectFusion, which focuses on 3D reconstruction using RGB-D sensors, although monocular SLAM has since been tackled with very similar approaches. Representing the reconstruction volumetrically as a truncated signed distance function yields most of the simplicity and efficiency that GPU implementations of these systems achieve. However, this representation is also memory-intensive, which limits its applicability to small-scale reconstructions. Several avenues have been explored for overcoming this limitation. With the aim of summarizing them and providing a fast and flexible 3D reconstruction pipeline, we propose a new, unifying framework called InfiniTAM. The core idea is that individual steps such as camera tracking, scene representation, and integration of new data can easily be replaced and adapted to the needs of the user. Along with the framework we also provide a set of components for scalable reconstruction: two camera trackers, one based on RGB data and one on depth data; two representations of the 3D volumetric data, a dense volume and one based on hashes of sub-blocks; and an optional module for swapping sub-blocks in and out of the typically limited GPU memory.
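
The abstract mentions the two ingredients that make the framework scalable: a truncated signed distance function (TSDF) fused with a weighted running average, and a hash table of sparsely allocated sub-blocks that can be swapped out of GPU memory. The C++ fragment below is a minimal sketch of these ideas under stated assumptions; the type names (Voxel, VoxelBlock, HashEntry), the block size, and the weight cap are illustrative choices, not taken from the InfiniTAM source.

// Minimal sketch of a TSDF voxel, a hashed sub-block, and the classic
// weighted running-average fusion step (as popularized by KinectFusion).
// All names and constants here are illustrative assumptions.
#include <cmath>

// One voxel of the truncated signed distance function (TSDF): a clamped
// distance to the nearest surface plus an integration weight.
struct Voxel {
    float sdf    = 1.0f;   // truncated signed distance, normalized to [-1, 1]
    float weight = 0.0f;   // number of observations fused so far
};

// In the hashed representation the volume is split into small sub-blocks
// (8x8x8 voxels here) that are only allocated near observed surfaces.
constexpr int kBlockSide = 8;
struct VoxelBlock {
    Voxel voxels[kBlockSide * kBlockSide * kBlockSide];
};

// Hash entry mapping a sub-block's integer grid position to its storage
// slot; a negative offset marks a block currently swapped out of GPU
// memory, to be fetched back by the swapping module when needed.
struct HashEntry {
    int pos[3];  // sub-block coordinates in the block grid
    int offset;  // index into the block array, or -1 if not resident
};

// Fuse one new truncated distance observation into a voxel using a
// weighted running average, capping the weight so old data can decay.
inline void fuse(Voxel& v, float newSdf, float maxWeight = 100.0f) {
    v.sdf = (v.sdf * v.weight + newSdf) / (v.weight + 1.0f);
    v.weight = std::fmin(v.weight + 1.0f, maxWeight);
}

int main() {
    Voxel v;
    fuse(v, -0.25f);  // one observation: surface slightly behind the voxel
    return 0;
}

Capping the weight keeps the average responsive to scene changes. The dense-volume representation would store the same Voxel type in a single flat array rather than in hashed sub-blocks, trading memory for simpler addressing.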