Image selection for improved multi-view stereo

Alexander Hornung, Boyi Zeng, Leif Kobbelt
RWTH Aachen University, Aachen
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008


@inproceedings{hornung2008image,
   title={Image selection for improved multi-view stereo},
   author={Hornung, A. and Zeng, B. and Kobbelt, L.},
   booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
   year={2008}
}




The Middlebury multi-view stereo evaluation clearly shows that the quality and speed of most multi-view stereo algorithms depend significantly on the number and selection of input images. In general, not all input images contribute equally to the quality of the output model, since several images may often contain similar and hence overly redundant visual information. This leads to unnecessarily increased processing times. On the other hand, a certain degree of redundancy can help to improve the reconstruction in more "difficult" regions of a model. In this paper we propose an image selection scheme for multi-view stereo which results in improved reconstruction quality compared to uniformly distributed views. Our method is tuned towards the typical requirements of current multi-view stereo algorithms, and is based on the idea of incrementally selecting images so that the overall coverage of a simultaneously generated proxy is guaranteed without adding too much redundant information. Critical regions such as cavities are detected by an estimate of the local photo-consistency and are improved by adding additional views. Our method is highly efficient, since most computations can be offloaded to the GPU. We evaluate our approach with four different algorithms participating in the Middlebury benchmark and show that in each case reconstructions based on our selected images yield improved output quality while at the same time reducing the processing time considerably.
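The paper itself does not include code, but the core idea of greedy, coverage-driven view selection followed by targeted reinforcement of low-consistency regions can be sketched as follows. This is a minimal illustration in Python, assuming a precomputed boolean visibility matrix between candidate views and proxy faces and a per-face photo-consistency estimate in [0, 1]; the function select_views, both input arrays, and both thresholds are hypothetical placeholders, not the authors' implementation (which, per the abstract, offloads these computations to the GPU).

```python
# Sketch of incremental, coverage-driven view selection. All names and
# thresholds are illustrative assumptions, not the paper's actual code.
import numpy as np

def select_views(visibility, photo_consistency,
                 coverage_target=0.95, consistency_threshold=0.5):
    """Greedily pick views until the proxy is covered, then add extra
    views for faces whose photo-consistency is poor.

    visibility        : (n_views, n_faces) bool array; True if a proxy
                        face is visible in a candidate view.
    photo_consistency : (n_faces,) array in [0, 1]; low values mark
                        "difficult" regions such as cavities.
    """
    n_views, n_faces = visibility.shape
    covered = np.zeros(n_faces, dtype=bool)
    selected = []

    # Phase 1: cover the proxy with as few views as possible by always
    # taking the view that adds the most not-yet-covered faces.
    while covered.mean() < coverage_target:
        gains = (visibility & ~covered).sum(axis=1)
        gains[selected] = -1              # never re-pick a view
        best = int(np.argmax(gains))
        if gains[best] <= 0:              # no remaining view adds coverage
            break
        selected.append(best)
        covered |= visibility[best]

    # Phase 2: reinforce "difficult" low-consistency regions with
    # additional views, accepting some redundancy where it helps.
    difficult = photo_consistency < consistency_threshold
    for _ in range(n_views - len(selected)):
        gains = (visibility & difficult).sum(axis=1)
        gains[selected] = -1
        best = int(np.argmax(gains))
        if gains[best] <= 0:              # no view sees a difficult face
            break
        selected.append(best)
        difficult &= ~visibility[best]    # count each face as reinforced once

    return selected
```

The greedy set-cover style of the first phase trades optimality for speed, which fits the efficiency goal stated in the abstract: selection must stay cheap relative to the multi-view stereo reconstruction it feeds.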