Online video synthesis for removing occluding objects using multiple uncalibrated cameras via plane sweep algorithm
Keio University, 3-14-1, Hiyoshi, Kohoku-ku, Yokohama-shi 223-8522, Japan
Third ACM/IEEE International Conference on Distributed Smart Cameras, 2009. ICDSC 2009
@inproceedings{hosokawa2009online,
title={Online video synthesis for removing occluding objects using multiple uncalibrated cameras via plane sweep algorithm},
author={Hosokawa, T. and Jarusirisawad, S. and Saito, H.},
booktitle={Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC 2009)},
pages={1--8},
year={2009},
organization={IEEE}
}
We present an online rendering system that removes occluding objects in front of a target scene from an input video, using multiple videos captured with multiple cameras. To obtain the geometric relations among all cameras, we use projective grid space (PGS), which is defined by the epipolar geometry between two basis cameras. We then apply the plane-sweep algorithm to generate a depth image for the input camera. By excluding the regions of the occluding objects from the volume swept by the planes, we can generate a depth map that is free of the occluders. Using this depth map, we render an image without obstacles from the multiple camera videos. Since the computation is performed on a graphics processing unit (GPU), we achieve real-time online rendering with a standard PC and multiple USB cameras.
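To make the plane-sweep step concrete, below is a minimal CPU sketch of depth estimation with occluder exclusion. It assumes the per-plane homographies that map reference-view pixels into each source view are already available (in the paper these relations come from the projective grid space defined by the two basis cameras, not from full calibration). The function names, the nearest-neighbour warping, and the colour-variance photo-consistency measure are illustrative choices, not the authors' GPU implementation.

import numpy as np


def warp_to_reference(image, H, out_shape):
    # Inverse-warp `image` into the reference view with homography H
    # (mapping reference pixel coordinates to source pixel coordinates),
    # using nearest-neighbour sampling.  Returns the warped image and a
    # mask of reference pixels that landed inside the source image.
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1).astype(np.float64)
    src = H @ pts
    zs = np.where(np.abs(src[2]) < 1e-12, 1e-12, src[2])  # guard against division by zero
    sx = np.round(src[0] / zs).astype(int)
    sy = np.round(src[1] / zs).astype(int)
    inside = (sx >= 0) & (sx < image.shape[1]) & (sy >= 0) & (sy < image.shape[0])
    warped = np.full((h * w, image.shape[2]), np.nan)
    warped[inside] = image[sy[inside], sx[inside]]
    return warped.reshape(h, w, image.shape[2]), inside.reshape(h, w)


def plane_sweep_depth(ref_shape, images, occluder_masks, homographies_per_plane):
    # For each hypothesised depth plane, warp every source view into the
    # reference camera and measure photo-consistency (colour variance across
    # views), ignoring pixels covered by the occluding objects.  The plane
    # with the lowest variance is kept per pixel, giving a coarse depth map
    # that does not contain the obstacles.
    h, w = ref_shape
    best_cost = np.full((h, w), np.inf)
    best_plane = np.zeros((h, w), dtype=int)
    for p, homographies in enumerate(homographies_per_plane):
        acc = np.zeros((h, w, 3))
        acc_sq = np.zeros((h, w, 3))
        counts = np.zeros((h, w))
        for img, occ, H in zip(images, occluder_masks, homographies):
            # Exclude occluder pixels so they never contribute to the sweep.
            masked = np.where(occ[..., None], np.nan, img.astype(np.float64))
            warped, inside = warp_to_reference(masked, H, (h, w))
            use = inside & ~np.isnan(warped[..., 0])
            acc[use] += warped[use]
            acc_sq[use] += warped[use] ** 2
            counts[use] += 1
        enough = counts >= 2                      # need at least two views to compare
        cnt = counts[enough][:, None]
        mean = acc[enough] / cnt
        var = (acc_sq[enough] / cnt - mean ** 2).sum(axis=1)
        cost = np.full((h, w), np.inf)
        cost[enough] = var
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_plane[better] = p
    return best_plane  # index of the winning plane per reference pixel

In the actual system the per-plane warping and cost aggregation run on the GPU, which is what makes the real-time online rendering possible; everything above runs in NumPy purely for clarity, and how the occluder regions are identified in each view is left outside the sketch.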