{"id":5376,"date":"2011-09-04T08:59:44","date_gmt":"2011-09-04T05:59:44","guid":{"rendered":"http:\/\/hgpu.org\/?p=5376"},"modified":"2011-09-04T08:59:44","modified_gmt":"2011-09-04T05:59:44","slug":"high-speed-view-interpolation-for-tele-teaching-and-tele-conferencing","status":"publish","type":"post","link":"https:\/\/hgpu.org\/?p=5376","title":{"rendered":"High speed view interpolation for tele-teaching and tele-conferencing"},"content":{"rendered":"<p>This paper presents an algorithm to generate an interpolated view between two camera viewpoints in a fast and automatic way (6-7 fps on a PentIV @ 2.6 GHz, Geforce FX AGP 4). Nothing more than a desktop PC and a set of low end consumer grade cameras are needed to simulate the video stream of any intermediate camera. Parallel use of the GPU (&#8216;plane sweep&#8217; algorithm) and the CPU (&#8216;min-cut\/max-flow&#8217; regularisation algorithm) is made to calculate the depth values. The final interpolations for any intermediate camera position are obtained by a projectively correct blended warp of the input images on a 3D mesh. Limited extrapolation is also feasible. The goal is to develop more advanced tele-teaching and videoconferencing environments, and this without the need of many cameras. Camera movements can be simulated and the best view can be selected whether this is recorded by a real camera or not. Compared to putting a human editor in control, the cost decreases dramatically, without losing all the added value of video stream editing.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This paper presents an algorithm to generate an interpolated view between two camera viewpoints in a fast and automatic way (6-7 fps on a PentIV @ 2.6 GHz, Geforce FX AGP 4). Nothing more than a desktop PC and a set of low end consumer grade cameras are needed to simulate the video stream of 
Categories: Algorithms, Computer science, Paper
Tags: algorithms, computer science, nVidia, nVidia GeForce 4, stereo vision