{"id":5347,"date":"2011-09-01T12:34:53","date_gmt":"2011-09-01T09:34:53","guid":{"rendered":"http:\/\/hgpu.org\/?p=5347"},"modified":"2011-09-01T12:34:53","modified_gmt":"2011-09-01T09:34:53","slug":"simultaneous-estimation-of-super-resolved-depth-and-all-in-focus-images-from-a-plenoptic-camera","status":"publish","type":"post","link":"https:\/\/hgpu.org\/?p=5347","title":{"rendered":"Simultaneous estimation of super-resolved depth and all-in-focus images from a plenoptic camera"},"content":{"rendered":"<p>This paper presents a new technique to simultaneously estimate the depth map and the all-in-focus image of a scene, both at super-resolution, from a plenoptic camera. A plenoptic camera uses a microlens array to measure the radiance and direction of all the light rays in a scene. It is composed of n×n microlenses, each of which generates an m×m image. Previous approaches to the depth and all-in-focus estimation problem processed the plenoptic image, generated an n×n×m focal stack, and were able to obtain an n×n depth map and all-in-focus image of the scene. This is a major drawback of the plenoptic camera approach to 3DTV, since the total resolution of the camera, n²m², is divided by m² to obtain a final resolution of only n² pixels. In our approach we propose a new super-resolution focal stack that is combined with multiview depth estimation. This technique allows a theoretical resolution of approximately n²m²\/4 pixels, an O(m²) improvement over previous approaches. From a practical point of view, in typical scenes we are able to increase the resolution by a factor of 25 compared with previous techniques.
The time complexity of the algorithm makes real-time processing for 3DTV possible on appropriate hardware (GPUs or FPGAs), so it could be used in plenoptic video cameras.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This paper presents a new technique to simultaneously estimate the depth map and the all-in-focus image of a scene, both at super-resolution, from a plenoptic camera. A plenoptic camera uses a microlens array to measure the radiance and direction of all the light rays in a scene. It is composed of n×n microlenses and each [&hellip;]<\/p>\n","protected":false},"author":351,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[36,33,3],"tags":[1787,1786],"class_list":["post-5347","post","type-post","status-publish","format-standard","hentry","category-algorithms","category-image-processing","category-paper","tag-algorithms","tag-image-processing"],"views":2177,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/5347","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/users\/351"}],"replies":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5347"}],"version-history":[{"count":0,"href":"https:\/\/hgpu.org\/index.php?rest_route
=\/wp\/v2\/posts\/5347\/revisions"}],"wp:attachment":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5347"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5347"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5347"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}