Real-Time All-in-Focus Video-Based Rendering Using A Network Camera Array

Yuichi Taguchi, Keita Takahashi, Takeshi Naemura
Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
3DTV Conference: The True Vision – Capture, Transmission and Display of 3D Video, 2008


@inproceedings{taguchi2008realtime,
   title={Real-time all-in-focus video-based rendering using a network camera array},
   author={Taguchi, Y. and Takahashi, K. and Naemura, T.},
   booktitle={3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video, 2008},
   year={2008}
}








We present a real-time video-based rendering system using a network camera array. Our system consists of 64 commodity network cameras connected to a single PC through Gigabit Ethernet. To render a high-quality novel view, we estimate a view-dependent per-pixel depth map in real time using a layered representation. The rendering algorithm is fully implemented on the GPU, which allows our system to use the CPU and GPU efficiently, independently and in parallel. With QVGA input video, our system renders free-viewpoint video at up to 30 fps, depending on the rendering parameters. Experimental results show high-quality images synthesized from a variety of scenes.
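The layered, view-dependent depth estimation described in the abstract can be sketched as a plane-sweep test: warp every camera image onto each candidate depth layer, then pick, per pixel, the layer where the cameras agree most (lowest color variance). The sketch below uses synthetic data and NumPy on the CPU; the actual system performs the warping and selection on the GPU with 64 camera streams, and all function and variable names here are our own assumptions, not the authors' code.

```python
# Hedged sketch of per-pixel depth-layer selection via photo-consistency
# (a plane-sweep-style layered representation). Synthetic data only.
import numpy as np

def select_depth_layers(warped_stack):
    """warped_stack: (L, C, H, W) array -- for each of L depth layers,
    the C camera images warped onto that layer.
    Returns the (H, W) map of per-pixel layer indices that minimize
    color variance across cameras (the most photo-consistent layer)."""
    variance = warped_stack.var(axis=1)   # (L, H, W): spread across cameras
    return variance.argmin(axis=0)        # (H, W): best layer per pixel

# Toy example: 4 depth layers, 3 cameras, 2x2 image. Layer 2 is made
# perfectly consistent (identical across cameras), so it wins everywhere.
rng = np.random.default_rng(0)
stack = rng.random((4, 3, 2, 2))
stack[2] = 0.5                            # zero variance at layer 2
depth_map = select_depth_layers(stack)
print(depth_map)                          # all entries are 2
```

In the real pipeline this selection is view-dependent: the warps are computed with respect to the desired novel viewpoint, so the depth map (and hence the rendered image) is re-estimated for every output frame.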
