Improving Cache Locality for GPU-based Volume Rendering

Y. Sugimoto, F. Ino, K. Hagihara
Nippon Telegraph and Telephone East Corporation, 19-2, Nishi-shinjuku 3-chome, Shinjuku, Tokyo 163-8019, Japan; Graduate School of Information Science and Technology, Osaka University, 1-5 Yamada-oka, Suita, Osaka 565-0871, Japan

@article{sugimotoimproving,

   title={Improving Cache Locality for GPU-based Volume Rendering},

   author={SUGIMOTO, Yuki and INO, Fumihiko and HAGIHARA, Kenichi}

}

We present a cache-aware method for accelerating texture-based volume rendering on a graphics processing unit (GPU). Because a GPU has a hierarchical architecture in terms of processing and memory units, cache optimization is important for maximizing the performance of memory-intensive applications. Our method localizes texture memory references according to the location of the viewpoint and dynamically selects the width and height of thread blocks (TBs) so that each warp, a group of 32 threads processed simultaneously, minimizes its memory access strides. We also incorporate transposed thread indexing to perform TB-level cache optimization for specific viewpoints. Furthermore, we maximize TB size to exploit spatial locality with fewer resident TBs. For viewpoints with relatively large strides, we synchronize the threads of a TB at regular intervals to realize synchronous ray propagation. Experimental results indicate that our cache-aware method doubles the worst-case rendering performance compared with that achieved using the CUDA and OpenCL software development kits.
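
The abstract describes the techniques only at a high level. The following CUDA sketch is not the authors' code; the kernel name, parameters, and orthographic ray setup are illustrative assumptions. It shows how two of the ideas, transposed thread indexing within a TB and periodic intra-block synchronization during ray marching, might look in a ray-casting kernel.

    #include <cuda_runtime.h>

    // Hypothetical ray-casting kernel: each thread marches one ray through a
    // 3D texture and accumulates samples into an output image.
    __global__ void renderKernel(cudaTextureObject_t volumeTex, float *image,
                                 int width, int height, float3 rayStep,
                                 int stepCount, int syncInterval, bool transpose)
    {
        // Transposed indexing: swap threadIdx.x and threadIdx.y so that, for
        // viewpoints where the default mapping yields large texture strides,
        // consecutive threads of a warp address nearby texels instead.
        int tx = transpose ? threadIdx.y : threadIdx.x;
        int ty = transpose ? threadIdx.x : threadIdx.y;
        int px = blockIdx.x * blockDim.x + tx;
        int py = blockIdx.y * blockDim.y + ty;
        bool active = (px < width) && (py < height);  // keep all threads in the loop

        float3 pos = make_float3((float)px, (float)py, 0.0f);
        float accum = 0.0f;

        for (int i = 0; i < stepCount; ++i) {
            if (active) {
                accum += tex3D<float>(volumeTex, pos.x, pos.y, pos.z);
                pos.x += rayStep.x; pos.y += rayStep.y; pos.z += rayStep.z;
            }
            // Synchronous ray propagation: rendezvous the whole block every few
            // steps so its rays sample nearby slices and reuse cached texels.
            if (syncInterval > 0 && (i + 1) % syncInterval == 0) __syncthreads();
        }
        if (active) image[py * width + px] = accum;
    }

In the paper's scheme, the TB width and height (and, by extension, a transpose decision) are selected dynamically per viewpoint; in this sketch they are simply launch parameters, e.g., a wide block such as dim3(32, 8) with transpose disabled when the default mapping already gives small strides.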

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors

Contact us: