Explicit Cache Management for Volume Ray-Casting on Parallel Architectures

Daniel Jönsson, Per Ganestam, Anders Ynnerman, Michael Doggett, Timo Ropinski
C-Research, Linköping University, Sweden
Eurographics Symposium on Parallel Graphics and Visualization (EGPGV), 2012

@inproceedings{JGYDR12,
   author    = {J{\"o}nsson, Daniel and Ganestam, Per and Ynnerman, Anders and Doggett, Michael and Ropinski, Timo},
   title     = {Explicit Cache Management for Volume Ray-Casting on Parallel Architectures},
   booktitle = {EG Symposium on Parallel Graphics and Visualization (EGPGV)},
   year      = {2012},
   note      = {accepted}
}

A major challenge when designing general-purpose graphics hardware is to allow efficient access to texture data. Although different rendering paradigms vary with respect to their data access patterns, the data caching provided by the graphics architecture offers no flexibility. In this paper we focus on volume ray-casting and show the benefits of algorithm-aware data caching. Our Marching Caches method exploits inter-ray coherence and thus utilizes the memory layout of the highly parallel processors by allowing them to share data through a cache which marches along with the ray front. By exploiting Marching Caches we can apply higher-order reconstruction and enhancement filters to generate more accurate and enriched renderings with improved rendering performance. We have tested our Marching Caches with seven different filters, e.g., Catmull-Rom, B-spline, and ambient occlusion projection, and show that a speedup of four times can be achieved compared to using the caching implicitly provided by the graphics hardware, and that the memory bandwidth to global memory can be reduced by orders of magnitude. Throughout the paper, we introduce the Marching Cache concept, provide implementation details, and discuss the performance and memory bandwidth impact of different filters.
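
The abstract only outlines the idea, so as a rough illustration (not the paper's actual implementation) the CUDA sketch below shows one way a block-shared cache can march along with a ray front: a thread block handles a tile of rays stepping through the volume in lockstep, cooperatively stages each slab of voxels in shared memory, and then samples from on-chip memory instead of global memory. The kernel name marchingCacheKernel, the TILE and SLAB sizes, the axis-aligned viewing direction, nearest-neighbour sampling, and maximum-intensity compositing are all simplifying assumptions made for this sketch.

// Illustrative sketch only: a per-block shared-memory cache that "marches"
// with the ray front. Assumes an axis-aligned view along +z, a volume whose
// edge length is a multiple of TILE and SLAB, nearest-neighbour sampling and
// maximum-intensity compositing. None of these choices come from the paper.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

#define TILE 8   // rays handled per block edge (8x8 rays per block)
#define SLAB 8   // z-extent of the per-block cache, in voxels

__global__ void marchingCacheKernel(const float* volume, int dim, float* image)
{
    // Shared cache holding the sub-volume currently traversed by this
    // block's ray front; it advances one slab at a time.
    __shared__ float cache[SLAB][TILE][TILE];

    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;

    float result = 0.0f;  // maximum-intensity projection for simplicity

    for (int zBase = 0; zBase < dim; zBase += SLAB) {
        // Cooperative load: each global-memory voxel of the slab is read
        // once and then reused from on-chip memory.
        for (int dz = 0; dz < SLAB; ++dz) {
            cache[dz][threadIdx.y][threadIdx.x] =
                volume[((size_t)(zBase + dz) * dim + y) * dim + x];
        }
        __syncthreads();

        // Sample from the shared cache instead of global memory. A wide
        // reconstruction or enhancement filter would also read the voxels
        // cached by neighbouring rays here, which is where the bandwidth
        // savings of a shared marching cache come from.
        for (int dz = 0; dz < SLAB; ++dz) {
            result = fmaxf(result, cache[dz][threadIdx.y][threadIdx.x]);
        }
        __syncthreads();
    }

    image[(size_t)y * dim + x] = result;
}

int main()
{
    const int dim = 64;                       // 64^3 synthetic test volume
    std::vector<float> hostVol(dim * dim * dim);
    for (size_t i = 0; i < hostVol.size(); ++i)
        hostVol[i] = (float)(i % 100) / 100.0f;

    float *dVol = nullptr, *dImg = nullptr;
    cudaMalloc(&dVol, hostVol.size() * sizeof(float));
    cudaMalloc(&dImg, dim * dim * sizeof(float));
    cudaMemcpy(dVol, hostVol.data(), hostVol.size() * sizeof(float),
               cudaMemcpyHostToDevice);

    dim3 block(TILE, TILE);
    dim3 grid(dim / TILE, dim / TILE);
    marchingCacheKernel<<<grid, block>>>(dVol, dim, dImg);
    cudaDeviceSynchronize();

    std::vector<float> hostImg(dim * dim);
    cudaMemcpy(hostImg.data(), dImg, hostImg.size() * sizeof(float),
               cudaMemcpyDeviceToHost);
    printf("pixel(0,0) = %f\n", hostImg[0]);

    cudaFree(dVol);
    cudaFree(dImg);
    return 0;
}

In the paper itself the cache follows the ray front for arbitrary view directions rather than a fixed z-axis, and higher-order filters such as Catmull-Rom or B-spline are evaluated from the cached voxels; the sketch above only illustrates the staging pattern.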
