Real-Time All-in-Focus Video-Based Rendering Using A Network Camera Array
Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
3DTV Conference: The True Vision – Capture, Transmission and Display of 3D Video, 2008
@inproceedings{taguchi2008real,
  title={Real-time all-in-focus video-based rendering using a network camera array},
  author={Taguchi, Y. and Takahashi, K. and Naemura, T.},
  booktitle={3DTV Conference: The True Vision -- Capture, Transmission and Display of 3D Video, 2008},
  pages={241--244},
  year={2008},
  organization={IEEE}
}
We present a real-time video-based rendering system that uses a network camera array. The system consists of 64 commodity network cameras connected to a single PC over Gigabit Ethernet. To render a high-quality novel view, we estimate a view-dependent, per-pixel depth map in real time using a layered representation. The rendering algorithm is implemented entirely on the GPU, which allows our system to use the CPU and GPU independently and in parallel. With QVGA input video, the system renders free-viewpoint video at up to 30 fps, depending on the rendering parameters. Experimental results show high-quality images synthesized from various scenes.
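The layered, view-dependent depth estimation described in the abstract is essentially a plane-sweep search: each candidate depth layer warps the neighboring camera images toward the virtual viewpoint and is scored by how well their colors agree, and each pixel keeps the best-scoring layer. The sketch below is a minimal NumPy illustration of that general idea, not the authors' GPU implementation; the function name, the integer-shift warping model, and the variance-based consistency cost are all simplifying assumptions made for illustration.

# Minimal sketch of layered (plane-sweep style) per-pixel depth selection.
# Assumptions (not from the paper): views are pre-rectified, per-layer warps
# reduce to integer horizontal shifts, and color variance across cameras is
# used as the photo-consistency cost.
import numpy as np

def estimate_depth_layers(views, shifts_per_layer):
    """views: list of HxWx3 float arrays (neighboring camera images).
    shifts_per_layer: for each hypothesized depth layer, a list of integer
    pixel shifts (one per view) approximating the warp to the virtual view.
    Returns an HxW array of winning layer indices (a per-pixel depth map).
    """
    h, w, _ = views[0].shape
    best_cost = np.full((h, w), np.inf)
    best_layer = np.zeros((h, w), dtype=np.int32)

    for layer, shifts in enumerate(shifts_per_layer):
        # Warp every view to the virtual viewpoint for this depth hypothesis.
        warped = [np.roll(v, s, axis=1) for v, s in zip(views, shifts)]
        stack = np.stack(warped, axis=0)        # shape (N, H, W, 3)
        # Photo-consistency cost: variance across cameras, summed over RGB.
        cost = stack.var(axis=0).sum(axis=-1)   # shape (H, W)
        # Keep this layer wherever it beats the best cost seen so far.
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_layer[better] = layer

    return best_layer

Because each pixel's layer score is independent of its neighbors, the per-layer cost evaluation maps naturally onto a fragment shader or GPU kernel, which is what makes a fully GPU-resident implementation such as the one described in the abstract feasible at interactive rates.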