Hybrid Sample-based Surface Rendering

F. Reichl, M.G. Chajdas, K. Bürger, R. Westermann
Technische Universität München
Proceedings of Vision, Modeling, and Visualization (VMV), 2012

@inproceedings{VMV12:47-54:2012,
   crossref = {VMV12-proc},
   author   = {Florian Reichl and Matth\"{a}us G. Chajdas and Kai B\"{u}rger and R\"{u}diger Westermann},
   title    = {Hybrid Sample-based Surface Rendering},
   pages    = {47--54},
   url      = {http://diglib.eg.org/EG/DL/PE/VMV/VMV12/047-054.pdf},
   doi      = {10.2312/PE/VMV/VMV12/047-054}
}


The performance of rasterization-based rendering on current GPUs depends strongly on the ability to avoid overdraw and to avoid rendering triangles smaller than the pixel size; otherwise, the rates at which high-resolution polygon models can be displayed drop significantly. Instead of trying to build these abilities into the rasterization-based rendering pipeline, we propose an alternative pipeline implementation that uses rasterization and ray-casting simultaneously in every frame to determine eye-ray intersections. To make ray-casting competitive with rasterization, we introduce a memory-efficient sample-based data structure that gives rise to an efficient ray traversal procedure. In combination with a regular model subdivision, the most suitable rendering technique can be selected at run-time for each part. For very large triangle meshes our method can outperform pure rasterization while requiring a considerably smaller memory budget on the GPU. Since the proposed data structure can be constructed from any renderable surface representation, it can also be used to render isosurfaces in scalar volume fields efficiently; in this case the data structure is generated on-the-fly in CUDA on the GPU. The compactness of the data structure allows rendering from GPU memory when alternative techniques already require exhaustive paging.
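The per-part selection described above can be illustrated with a small sketch. The heuristic, thresholds, and parameter names below are illustrative assumptions, not the paper's actual criterion: for each block of the subdivided model we estimate the average screen-space triangle size from the block's bounding sphere and triangle count, and fall back to ray-casting when triangles shrink below roughly a pixel.

```cpp
#include <cmath>

// Hypothetical sketch of run-time technique selection per model block.
// All parameters and the 1-pixel threshold are illustrative assumptions.
enum class Technique { Rasterize, RayCast };

// blockRadius:   bounding-sphere radius of the block (world units)
// triangleCount: number of triangles stored in the block
// distance:      eye-to-block distance (world units)
// focalPx:       camera focal length expressed in pixels
Technique chooseTechnique(double blockRadius, long triangleCount,
                          double distance, double focalPx) {
    const double kPi = 3.14159265358979323846;
    // Approximate screen-space radius and area covered by the block, in pixels.
    double projectedRadiusPx = focalPx * blockRadius / distance;
    double blockAreaPx = kPi * projectedRadiusPx * projectedRadiusPx;
    // Average projected area per triangle; assume roughly half the
    // triangles face the eye at any time.
    double triAreaPx = blockAreaPx / (0.5 * static_cast<double>(triangleCount));
    // Sub-pixel triangles rasterize inefficiently, so ray-cast the
    // sample-based structure for this block instead.
    return (triAreaPx < 1.0) ? Technique::RayCast : Technique::Rasterize;
}
```

A dense, distant block (e.g. one million triangles projecting onto a few hundred pixels) would be ray-cast, while a close-up, coarse block stays on the rasterization path.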

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors