
Simultaneous estimation of super-resolved depth and all-in-focus images from a plenoptic camera

F. Perez Nava, J.P. Luke
Departamento de Estadistica, Investigacion Operativa y Computacion, Universidad de La Laguna, 38271, Canary Islands, Spain
3DTV Conference: The True Vision – Capture, Transmission and Display of 3D Video, 2009

@inproceedings{nava2009simultaneous,
   title={Simultaneous estimation of super-resolved depth and all-in-focus images from a plenoptic camera},
   author={P{\'e}rez Nava, F. and L{\"u}ke, J.P.},
   booktitle={3DTV Conference: The True Vision -- Capture, Transmission and Display of 3D Video, 2009},
   year={2009},
   publisher={Citeseer}
}

This paper presents a new technique to simultaneously estimate the depth map and the all-in-focus image of a scene, both at super-resolution, from a plenoptic camera. A plenoptic camera uses a microlens array to measure the radiance and direction of all the light rays in a scene. It is composed of n×n microlenses, each of which generates an m×m image. Previous approaches to the depth and all-in-focus estimation problem processed the plenoptic image, generated an n×n×m focal stack, and obtained an n×n depth map and all-in-focus image of the scene. This is a major drawback of the plenoptic camera approach to 3DTV, since the total resolution of the camera, n²m², is divided by m² to obtain a final resolution of only n² pixels. In our approach we propose a new super-resolution focal stack that is combined with multiview depth estimation. This technique allows a theoretical resolution of approximately n²m²/4 pixels, an O(m²) improvement over previous approaches. From a practical point of view, in typical scenes we are able to increase the resolution of previous techniques by a factor of 25. The time complexity of the algorithm makes real-time processing for 3DTV possible with appropriate hardware (GPUs or FPGAs), so it could be used in plenoptic video cameras.
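The resolution claim in the abstract can be checked with simple arithmetic. The sketch below uses hypothetical camera dimensions (n = 250 microlenses per side, m = 10 pixels per microlens side are assumptions, not figures from the paper) to show how the proposed n²m²/4 output compares with the n² output of earlier focal-stack methods:

```python
# Illustrative resolution arithmetic for a plenoptic camera.
# The values of n and m below are assumed example numbers.
n = 250   # microlenses per side of the array
m = 10    # pixels per side behind each microlens

sensor_pixels = n**2 * m**2        # total sensor resolution: n^2 m^2
previous_px = n**2                 # earlier methods: n x n depth map / image
proposed_px = n**2 * m**2 // 4     # super-resolution focal stack: ~n^2 m^2 / 4

gain = proposed_px / previous_px   # equals m^2 / 4, an O(m^2) improvement
print(previous_px, proposed_px, gain)  # 62500 1562500 25.0
```

With m = 10 the improvement factor m²/4 is exactly 25, which is consistent with the 25× practical gain reported in the abstract.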


HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
