A Parallel Depth-aided Exemplar-based Inpainting for Real-time View Synthesis on GPU
College of Information Science and Engineering, Hunan University, Changsha, China
EMCA, 2013
@article{tian2013parallel,
  title={A Parallel Depth-aided Exemplar-based Inpainting for Real-time View Synthesis on GPU},
  author={Tian, Zheng and Xu, Cheng and Deng, Xiaoyun},
  year={2013}
}
Synthesizing new images from a given image pair and its corresponding depth maps is an essential function for many 3D video applications. Exemplar-based inpainting methods have been proposed in recent years to restore newly synthesized images by strategically filling the missing pixels that have no reference in the source views due to occlusion. Because of the prioritized filling process, these inpainting methods usually incur high computational complexity and can hardly reach real-time performance. In this paper, a parallel depth-aided inpainting method is proposed to address the efficiency of this class of algorithms. To reduce computation, the proposed method searches for background pixels within a restricted search range on the reference images for effective context filling. A partially parallel strategy is then proposed to speed up the inpainting process while maintaining its high restoration accuracy. Finally, the method is implemented with CUDA on an NVIDIA GTS 450 graphics card. Experimental results show that the proposed method produces results on par with the best existing methods and is suitable for real-time multi-view image synthesis.
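As a rough illustration of the restricted-range, depth-aided exemplar search the abstract describes, the following is a minimal NumPy sketch. The patch size, search radius, background depth threshold, and SSD matching cost are all assumptions for illustration; the paper's actual priority computation and CUDA kernels are not reproduced here.

```python
import numpy as np

def best_background_patch(ref, depth, target, mask, center,
                          half=2, search=8, bg_thresh=0.5):
    """Hypothetical helper: within a restricted search window of the
    reference image, find the background patch (depth < bg_thresh) that
    best matches the known pixels around `center` in the target image.
    `mask` is True where target pixels are missing (occluded).
    Returns the (row, col) of the best-matching patch centre."""
    h, w = ref.shape
    cy, cx = center
    tpatch = target[cy-half:cy+half+1, cx-half:cx+half+1]
    known = ~mask[cy-half:cy+half+1, cx-half:cx+half+1]
    best, best_cost = None, np.inf
    # Restricted search range: only a (2*search+1)^2 window around the hole.
    for y in range(max(half, cy-search), min(h-half, cy+search+1)):
        for x in range(max(half, cx-search), min(w-half, cx+search+1)):
            if depth[y, x] >= bg_thresh:   # keep only background candidates
                continue
            cand = ref[y-half:y+half+1, x-half:x+half+1]
            # SSD over the known pixels of the target patch only.
            cost = np.sum((cand[known] - tpatch[known]) ** 2)
            if cost < best_cost:
                best, best_cost = (y, x), cost
    return best

# Tiny synthetic example: one missing pixel in a copied view.
ref = np.arange(256, dtype=float).reshape(16, 16)
depth = np.zeros((16, 16))          # everything is background here
target = ref.copy()
mask = np.zeros((16, 16), dtype=bool)
mask[8, 8] = True                   # one occluded pixel
target[8, 8] = 0.0
print(best_background_patch(ref, depth, target, mask, (8, 8)))  # → (8, 8)
```

In the paper's setting, each missing pixel's search would run as an independent GPU thread, which is what makes the restricted window important: it bounds per-thread work and keeps the candidate patches in nearby memory.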
October 26, 2013 by hgpu