{"id":1546,"date":"2010-11-20T09:01:52","date_gmt":"2010-11-20T09:01:52","guid":{"rendered":"http:\/\/hgpu.org\/?p=1546"},"modified":"2010-11-20T09:01:52","modified_gmt":"2010-11-20T09:01:52","slug":"real-time-computation-of-photic-extremum-lines-pels","status":"publish","type":"post","link":"https:\/\/hgpu.org\/?p=1546","title":{"rendered":"Real-time computation of photic extremum lines (PELs)"},"content":{"rendered":"<p>Photic extremum lines (PELs) are view-dependent, object-space feature lines that characterize significant changes in surface illumination. Though very effective for conveying 3D shapes, PELs are computationally expensive due to the heavy involvement of third- and fourth-order derivatives. Also, they require the user to manually place a few auxiliary lights to depict the model details, which is usually tedious work. To overcome these challenges, we present a novel computational framework that improves both the speed and quality of PELs. First, we derive a simple, closed-form formula for the gradient operator such that derivatives of various orders can be computed efficiently and in parallel using graphics processing units (GPUs). The GPU-based PEL extraction algorithm is an order of magnitude faster than the original one. Second, we propose to extract PELs from various non-photorealistic shadings that not only depict the overall shape but also bring out details at different frequencies simultaneously. As a result, the user can easily control the relative emphasis on different scales and obtain the desired line drawing results. We demonstrate the improved PELs on a wide range of real-world objects.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Photic extremum lines (PELs) are view-dependent, object-space feature lines that characterize significant changes in surface illumination. Though very effective for conveying 3D shapes, PELs are computationally expensive due to the heavy involvement of third- and fourth-order derivatives. Also, they require the user to manually place a few auxiliary lights to depict the model [&hellip;]<\/p>\n","protected":false},"author":351,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[180,11,3],"tags":[1797,444,1782,20,251],"class_list":["post-1546","post","type-post","status-publish","format-standard","hentry","category-3d-graphics-and-realism","category-computer-science","category-paper","tag-3d-graphics-and-realism","tag-cg","tag-computer-science","tag-nvidia","tag-nvidia-geforce-gtx-285"],"views":2050,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/1546","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/users\/351"}],"replies":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1546"}],"version-history":[{"count":0,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/1546\/revisions"}],"wp:attachment":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1546"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1546"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1546"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}