Data structure design for GPU based heterogeneous systems

Jens Breitbart
Research Group Programming Languages / Methodologies, Universität Kassel, Kassel, Germany
International Conference on High Performance Computing & Simulation, 2009. HPCS ’09, p.44-51


@inproceedings{breitbart2009data,

   title={Data structure design for GPU based heterogeneous systems},

   author={Breitbart, J.},

   booktitle={High Performance Computing \& Simulation, 2009. HPCS’09. International Conference on},

   pages={44--51},

   year={2009}

}







This paper reports on our experience with data structure design for systems that combine multiple CPU cores with a programmable graphics card. We integrate our data structures into the game-like application OpenSteerDemo and compare them on two PC systems: one has a relatively fast single-core CPU and a slower GPU, whereas the other pairs a high-end GPU with a slower multi-core CPU. We design two grid-based data structures for efficiently solving the k-nearest-neighbor problem. The static grid uses grid cells of uniform size, whereas the dynamic grid does not rely on predefined grid cells but creates them at runtime. The static grid is designed for fast data structure creation; the dynamic grid is designed for high GPU simulation performance, which it achieves by exploiting the GPU memory system at the cost of a more complex construction algorithm. Our experiments show that with a slower CPU, the algorithm for creating the dynamic grid becomes the bottleneck, so no overall performance gain over the static grid is possible. The same holds when the simulation is run with a faster CPU and a slower GPU, although the break-even point differs. We also experimented with data structure creation on the GPU, but even for the static grid the resulting performance is not acceptable; the dynamic grid cannot be created on the GPU at all due to the lack of recursive function support. Finally, we provide a dynamic grid creation algorithm that uses multiple CPU cores, but it is slower than its sequential counterpart due to the parallelization overhead.
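The uniform-cell static grid described in the abstract can be illustrated with a short sketch. This is a minimal Python illustration of the general technique, not the paper's CUDA implementation; the class name, the ring-expansion search, and the `max_ring` safety bound are assumptions made here for clarity.

```python
import math
from collections import defaultdict

class StaticGrid:
    # Uniform grid: every cell has the same fixed size, chosen at construction
    # (the "static grid" idea: cheap to build, cells never change).
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = defaultdict(list)  # (ix, iy) -> list of positions

    def _cell_of(self, pos):
        return (int(math.floor(pos[0] / self.cell_size)),
                int(math.floor(pos[1] / self.cell_size)))

    def insert(self, pos):
        self.cells[self._cell_of(pos)].append(pos)

    def k_nearest(self, query, k):
        cx, cy = self._cell_of(query)
        candidates = []
        ring = 0
        max_ring = 64  # safety bound for sparse data (assumption)
        while ring <= max_ring:
            # Collect all points from cells on the current ring around the query cell.
            for ix in range(cx - ring, cx + ring + 1):
                for iy in range(cy - ring, cy + ring + 1):
                    if max(abs(ix - cx), abs(iy - cy)) == ring:
                        candidates.extend(self.cells.get((ix, iy), []))
            # Scan one extra ring after reaching k candidates, to guard against
            # closer points sitting just across a cell border.
            if len(candidates) >= k and ring > 0:
                break
            ring += 1
        candidates.sort(key=lambda p: (p[0] - query[0])**2 + (p[1] - query[1])**2)
        return candidates[:k]
```

A query only inspects cells near the query point instead of all agents, which is why such grids map well to the k-nearest-neighbor workload in OpenSteerDemo.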
