
Real-time depth estimation for immersive 3D videoconferencing

I. Feldmann, W. Waizenegger, N. Atzpadin, O. Schreer
Heinrich-Hertz Institute, Fraunhofer Institute for Telecommunications, Berlin, Germany
3DTV-Conference: The True Vision – Capture, Transmission and Display of 3D Video (3DTV-CON), 2010

@inproceedings{feldmann2010real,
  title={Real-time depth estimation for immersive 3D videoconferencing},
  author={Feldmann, I. and Waizenegger, W. and Atzpadin, N. and Schreer, O.},
  booktitle={3DTV-Conference: The True Vision -- Capture, Transmission and Display of 3D Video (3DTV-CON), 2010},
  pages={1--4},
  organization={IEEE},
  year={2010}
}


Interest in immersive 3D videoconferencing systems has existed for many years, from both a commercial and a research perspective. One of the major bottlenecks in this context remains the computational complexity of the required algorithmic modules. This paper addresses the problem from a hardware point of view: we use modern graphics boards, which allow a high degree of algorithmic parallelization in consumer PC environments, together with the processing capabilities of state-of-the-art multi-core CPUs. We propose a novel, scalable, high-performance 3D acquisition framework for immersive 3D videoconferencing systems that benefits from both. In this way we are able to integrate complex computer vision algorithms, such as visual hull computation, multi-view stereo matching, segmentation, image rectification, lens distortion correction, and virtual view synthesis, as well as data encoding, network signaling, and capture for 16 HD cameras, into a single real-time framework. This paper is based on results and experience from the European FP7 research project 3D Presence, which aims to build a real-time three-party, multi-user 3D videoconferencing system.
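The abstract mentions multi-view stereo matching as one of the integrated modules but does not describe the authors' algorithm. As an illustration only, here is a minimal, unoptimized sketch of the classical sum-of-absolute-differences (SAD) block-matching approach to computing a disparity map from a rectified stereo pair; the function name and parameters are my own, and a real-time GPU system like the one described would use a far more parallel and more sophisticated matcher.

```python
import numpy as np

def block_match(left, right, max_disp=16, block=5):
    """Naive SAD block matching on a rectified stereo pair.

    For each pixel in the left image, search up to max_disp pixels to
    the left in the right image and keep the disparity with the lowest
    sum of absolute differences over a block x block window.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(max_disp + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.int32)
                cost = np.abs(patch - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

The triple loop makes the per-pixel cost computations fully independent, which is exactly the structure that maps well onto the GPU parallelization the paper targets.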

