
Memory-Scalable GPU Spatial Hierarchy Construction

Qiming Hou, Xin Sun, Kun Zhou, Christian Lauterbach, Dinesh Manocha, Baining Guo
Tsinghua University
IEEE Transactions on Visualization and Computer Graphics, 2010. Published by the IEEE Computer Society.

@article{hou2010memory,
   title={Memory-scalable {GPU} spatial hierarchy construction},
   author={Hou, Qiming and Sun, Xin and Zhou, Kun and Lauterbach, Christian and Manocha, Dinesh},
   journal={IEEE Transactions on Visualization and Computer Graphics},
   issn={1077-2626},
   year={2010},
   publisher={IEEE Computer Society}
}



Recent GPU algorithms for constructing spatial hierarchies achieve promising performance on moderately complex models by using a BFS (breadth-first search) construction order. While the BFS order exploits the massive parallelism of the GPU, it consumes excessive GPU memory, which becomes a serious bottleneck for large models. In this paper, we propose a PBFS (partial breadth-first search) construction order that controls memory consumption while maximizing performance. We apply the PBFS order to two hierarchy construction algorithms. The first algorithm, for kd-trees, automatically balances the level of parallelism against intermediate memory usage to bound peak memory without CPU-GPU data transfer. We also develop memory allocation strategies to limit memory fragmentation. Our algorithm scales well with GPU memory and constructs kd-trees over models with millions of triangles at interactive rates within 1 GB of video memory, an order of magnitude more scalable than previous algorithms. The second algorithm performs out-of-core BVH (bounding volume hierarchy) construction for very large scenes: at each iteration, all constructed nodes are dumped to CPU memory and the GPU memory is freed for the next iteration. This algorithm can construct BVHs for scenes of 20M triangles, several times larger than previous GPU algorithms could handle.
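The core PBFS idea described above can be sketched in a few lines: instead of expanding the entire BFS frontier at once, each iteration expands only a bounded slice of it, deferring the rest, so peak intermediate working memory stays under a fixed budget. The sketch below is a minimal, hypothetical serial illustration of that ordering, not the paper's GPU implementation; the `split` and `is_leaf` callbacks and the `max_active` budget are assumptions for demonstration.

```python
from collections import deque

def build_pbfs(root_items, max_active, split, is_leaf):
    """Build a hierarchy in PBFS (partial breadth-first search) order.

    Only `max_active` frontier nodes are expanded per iteration, bounding
    the intermediate working set; the rest of the frontier is deferred.
    This is an illustrative sketch, not the paper's GPU algorithm.
    """
    frontier = deque([root_items])   # nodes still awaiting expansion
    nodes = []                       # finished nodes (the paper dumps these to CPU memory)
    peak = 0                         # peak number of simultaneously active nodes
    while frontier:
        # PBFS step: take at most `max_active` nodes off the BFS frontier.
        batch = [frontier.popleft() for _ in range(min(max_active, len(frontier)))]
        peak = max(peak, len(batch))
        for items in batch:
            if is_leaf(items):
                nodes.append(("leaf", items))
            else:
                left, right = split(items)       # e.g. a kd-tree split plane
                nodes.append(("inner", len(items)))
                frontier.append(left)
                frontier.append(right)
    return nodes, peak
```

With a simple median split and single-item leaves, `build_pbfs(list(range(8)), 2, ...)` produces the 15 nodes of a full binary tree while never expanding more than 2 nodes at once, whereas plain BFS would expand a whole level (up to 8 nodes) simultaneously.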
