
Literature review: Build and Travel KD-Tree with CUDA

M. Zhou
School of Computer Science, Carleton University, Ottawa, Canada K1S 5B6

@article{zhou2014literature,
   title={LITERATURE REVIEW: Build and Travel KD-Tree with CUDA},
   author={Zhou, Meng},
   year={2014}
}


Ray tracing is an important and widely used tool in computer graphics; the entertainment and game industries have already benefited greatly from it. However, the high computational load has long forced designers and end users to rely on offline ray tracing tools. In ray tracing, most of the computation is spent deciding whether hundreds of millions of rays hit objects in a scene. The naive algorithm computes the intersection between every ray and every triangle in the scene, which takes O(nm) time for n rays and m triangles. Inspired by binary search in one-dimensional space, one of the most popular and efficient improvements is to partition the scene and build a tree to represent it; with this tree, the problem can be solved in O(n log(m)) time. Traversing a kd-tree with n rays is a naturally parallel problem, since the rays are independent of one another. Thus, given a kd-tree for a scene, the intersection problem can be solved in O(log(m)) time on n processors, an optimal algorithm with a speedup of nm/log(m). In practice the speedup is much lower because of hardware limitations, which we discuss later. The kd-tree (k-dimensional tree) was developed in this setting, alongside many other spatial partitioning algorithms. Setting aside animation with dynamic scenes, the kd-tree has been proved to be the most efficient data structure for static scenes [2][16]. Using a kd-tree in ray tracing involves two tasks: (1) building the kd-tree from a triangle soup, and (2) traversing the kd-tree to find intersection points. This paper first reviews state-of-the-art serial and parallel algorithms for building and traversing kd-trees, then implements an algorithm that builds a kd-tree with CUDA, and finally analyses this algorithm and tries to make it faster and more memory-efficient.
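To make the traversal side concrete, below is a minimal CUDA sketch (not the paper's implementation) of the one-thread-per-ray pattern the abstract describes: n rays are traced in parallel, each descending the kd-tree independently. The node layout, the Ray/Tri structs, and the omission of ray-interval clipping and early termination are simplifying assumptions for illustration.

#include <cuda_runtime.h>

struct Ray { float3 origin, dir; };
struct Tri { float3 v0, v1, v2; };

// Inner node: axis in {0,1,2}, split plane, child indices in 'left'/'right'.
// Leaf: axis == -1, 'left' = first triangle index, 'right' = triangle count.
struct KdNode { int axis; float split; int left, right; };

__device__ float3 sub3(float3 a, float3 b) { return make_float3(a.x-b.x, a.y-b.y, a.z-b.z); }
__device__ float3 cross3(float3 a, float3 b) { return make_float3(a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x); }
__device__ float dot3(float3 a, float3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moeller-Trumbore ray/triangle test; returns the hit distance, or 1e30f on a miss.
__device__ float hitTri(Tri t, Ray r) {
    float3 e1 = sub3(t.v1, t.v0), e2 = sub3(t.v2, t.v0);
    float3 p = cross3(r.dir, e2);
    float det = dot3(e1, p);
    if (fabsf(det) < 1e-8f) return 1e30f;   // ray parallel to triangle plane
    float inv = 1.0f / det;
    float3 s = sub3(r.origin, t.v0);
    float u = dot3(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return 1e30f;
    float3 q = cross3(s, e1);
    float v = dot3(r.dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return 1e30f;
    float dist = dot3(e2, q) * inv;
    return (dist > 0.0f) ? dist : 1e30f;
}

// One thread per ray: each thread walks the tree with its own small stack,
// so the n rays proceed independently, as the O(log m)-per-ray argument assumes.
__global__ void traverseKernel(const KdNode* nodes, const Tri* tris,
                               const Ray* rays, int numRays, float* hitT) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numRays) return;
    Ray ray = rays[i];
    int stack[64];                 // per-thread traversal stack
    int top = 0;
    stack[top++] = 0;              // push the root node
    float best = 1e30f;
    while (top > 0) {
        KdNode node = nodes[stack[--top]];
        if (node.axis < 0) {       // leaf: test its triangle range
            for (int k = 0; k < node.right; ++k) {
                float d = hitTri(tris[node.left + k], ray);
                if (d < best) best = d;
            }
        } else {                   // inner node: visit near child first
            float o = (node.axis == 0) ? ray.origin.x
                    : (node.axis == 1) ? ray.origin.y : ray.origin.z;
            int nearChild = (o < node.split) ? node.left  : node.right;
            int farChild  = (o < node.split) ? node.right : node.left;
            stack[top++] = farChild;   // a full traversal would clip the ray's
            stack[top++] = nearChild;  // [tmin, tmax] interval and often skip it
        }
    }
    hitT[i] = best;                // closest hit distance, or 1e30f for a miss
}

A launch such as traverseKernel<<<(numRays + 255) / 256, 256>>>(...) assigns each ray to one thread. This sketch always visits both children of an inner node; production traversers clip the ray's parametric interval to cull the far subtree, and the gap between this idealised per-ray cost and the SIMD hardware's behaviour is one source of the practical speedup loss mentioned above.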
