{"id":8501,"date":"2012-11-14T23:55:40","date_gmt":"2012-11-14T21:55:40","guid":{"rendered":"http:\/\/hgpu.org\/?p=8501"},"modified":"2012-11-14T23:55:40","modified_gmt":"2012-11-14T21:55:40","slug":"load-balanced-parallel-gpu-out-of-core-for-continuous-lod-model-visualization","status":"publish","type":"post","link":"https:\/\/hgpu.org\/?p=8501","title":{"rendered":"Load Balanced Parallel GPU Out-of-Core for Continuous LOD Model Visualization"},"content":{"rendered":"<p>Rendering massive 3D models has been recognized as a challenging task. Due to the limited size of GPU memory, a massive model containing hundreds of millions of primitives cannot fit into most of modern GPUs. By applying parallel levelof-detail (LOD), as proposed in [1], only a portion of primitives instead of the whole are necessary to be streamed to the GPU. However, the low bandwidth in CPU-GPU communication is still the major bottleneck that prevents users from achieving highperformance rendering of massive 3D models on a single-GPU system. This paper explores a device-level parallel design that distributes the workloads for both GPU out-of-core and LOD processing in a multi-GPU multi-display system. Our multi-GPU out-of-core takes advantages of a load-balancing method and seamlessly integrates with the parallel LOD algorithm. By using frame-to-frame coherence, the overhead of data transferring is significantly reduced on each GPU. Our experiments show a highly interactive visualization of the &quot;Boeing 777&quot; airplane model that consists of over 332 million triangles and over 223 million vertices.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Rendering massive 3D models has been recognized as a challenging task. Due to the limited size of GPU memory, a massive model containing hundreds of millions of primitives cannot fit into most of modern GPUs. 
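The abstract describes the core idea at a high level: each frame, decide which LOD nodes each GPU should hold, and, by exploiting frame-to-frame coherence, transfer only the nodes that are not already resident on that GPU. The sketch below illustrates that bookkeeping; the node and GPU structures, the greedy least-loaded assignment, and the residency-preference rule are illustrative assumptions, not the paper's actual data structures or balancing policy.

    // Sketch: coherence-aware assignment of visible LOD nodes to GPUs plus
    // per-GPU residency tracking, so only newly required nodes are uploaded.
    // All types, names, and the cost model are assumptions for illustration.
    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <iterator>
    #include <set>
    #include <vector>

    struct LodNode { int id; std::size_t bytes; };   // one LOD cluster of primitives

    struct GpuState {
        std::set<int> resident;   // node ids already in this GPU's memory
        std::size_t   load = 0;   // bytes assigned this frame (balancing metric)
    };

    void streamFrame(const std::vector<LodNode>& visible, std::vector<GpuState>& gpus) {
        std::vector<std::set<int>> wanted(gpus.size());
        for (auto& g : gpus) g.load = 0;

        for (const LodNode& n : visible) {
            int target = -1;
            // Prefer the GPU that already holds this node (frame-to-frame coherence).
            for (std::size_t i = 0; i < gpus.size(); ++i)
                if (gpus[i].resident.count(n.id)) { target = static_cast<int>(i); break; }
            // Otherwise pick the least-loaded GPU (simple load balancing).
            if (target < 0) {
                target = 0;
                for (std::size_t i = 1; i < gpus.size(); ++i)
                    if (gpus[i].load < gpus[static_cast<std::size_t>(target)].load)
                        target = static_cast<int>(i);
            }
            wanted[static_cast<std::size_t>(target)].insert(n.id);
            gpus[static_cast<std::size_t>(target)].load += n.bytes;
        }

        for (std::size_t i = 0; i < gpus.size(); ++i) {
            std::vector<int> toUpload, toEvict;
            std::set_difference(wanted[i].begin(), wanted[i].end(),
                                gpus[i].resident.begin(), gpus[i].resident.end(),
                                std::back_inserter(toUpload));
            std::set_difference(gpus[i].resident.begin(), gpus[i].resident.end(),
                                wanted[i].begin(), wanted[i].end(),
                                std::back_inserter(toEvict));
            // In a real renderer these sets would drive asynchronous host-to-device
            // copies and buffer recycling on GPU i; here we only report the traffic.
            std::printf("GPU %zu: upload %zu nodes, evict %zu, reuse %zu\n",
                        i, toUpload.size(), toEvict.size(),
                        wanted[i].size() - toUpload.size());
            gpus[i].resident = wanted[i];
        }
    }

    int main() {
        std::vector<GpuState> gpus(2);
        std::vector<LodNode> frame1 = {{1, 4096}, {2, 8192}, {3, 2048}};
        std::vector<LodNode> frame2 = {{2, 8192}, {3, 2048}, {4, 1024}};
        streamFrame(frame1, gpus);   // first frame: every visible node is uploaded
        streamFrame(frame2, gpus);   // second frame: only node 4 causes new traffic
        return 0;
    }

Keeping a node on the GPU where it is already resident is what preserves the coherence savings across frames; a production system would presumably add a rebalancing threshold so that residency preference cannot let one GPU's load drift far above the others.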