Real-time Multi-view Depth Generation Using CUDA Multi-GPU

Eunsang Ko, Yunseok Song, and Yo-Sung Ho
Gwangju Institute of Science and Technology (GIST), 123 Cheomdangwagi-ro, Buk-gu, Gwangju 500-712, Republic of Korea
International Conference on Embedded Systems and Intelligent Technology (ICESIT), pp. 103-105, 2014


@inproceedings{ko2014realtime,
   title={Real-time Multi-view Depth Generation Using CUDA Multi-GPU},
   author={Ko, Eunsang and Song, Yunseok and Ho, Yo-Sung},
   booktitle={International Conference on Embedded Systems and Intelligent Technology (ICESIT)},
   pages={103--105},
   year={2014}
}





In this paper, we propose a real-time multi-view depth generation method using compute unified device architecture (CUDA) on multiple graphics processing units (GPUs). The objective is to generate multi-view depth maps in real time. We employ eight color cameras and three depth cameras. After capturing multi-view color and depth data, we warp the depth information to the color camera positions. Joint bilateral filtering (JBF) is then applied to fill the empty regions. This procedure is accelerated with CUDA, a platform for general-purpose computing on GPUs (GPGPU). As a result, depth maps for eight views are generated at 23 frames per second (fps) on a single-GPU computer; with a multi-GPU computer, depth generation reaches 34 fps.

