Data Layout Pruning on GPU

Xinbiao Gan, Zhiying Wang, Li Shen, Qi Zhu
School of Computer, National University of Defense Technology, Changsha 410073, China
Applied Mathematics and Information Sciences, 5 (2), 129, 2011

@article{gan2011data,
   title={Data Layout Pruning on GPU},
   author={Gan, Xinbiao and Wang, Zhiying and Shen, Li and Zhu, Qi},
   journal={Applied Mathematics and Information Sciences},
   volume={5},
   number={2},
   pages={129},
   year={2011}
}

This work is based on the NVIDIA GTX 280 using CUDA (Compute Unified Device Architecture). We classify datasets to be transferred into the CUDA memory hierarchy into two classes: SW (shared and must be written) and SR (shared but read-only). The memory spaces available in the CUDA-enabled GPU memory hierarchy (shared memory, constant memory, texture memory, and global memory) are then evaluated to identify the best memory space for each dataset class. The experimental results support the following conclusions: shared memory is proposed for SW data; constant memory is advisable for SR data; and texture memory is advisable for SR data on structured grids, especially 2D and 3D regular grids.
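The placement rules above can be illustrated with a minimal CUDA sketch (our own, not taken from the paper; the names `coeff`, `tile`, and `smooth` are hypothetical). SR data that every thread reads goes into constant memory, while SW data that threads write and share within a block is staged in shared memory:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

#define TILE 256

// SR (shared, read-only) data: constant memory is cached and is
// efficient when many threads read the same elements.
__constant__ float coeff[16];

__global__ void smooth(const float *in, float *out)
{
    // SW (shared, must-write) data: per-block shared memory, written
    // by the threads and visible block-wide after __syncthreads().
    __shared__ float tile[TILE];

    int i = threadIdx.x;
    int g = blockIdx.x * blockDim.x + i;

    tile[i] = in[g] * coeff[i % 16];   // write to SW staging area
    __syncthreads();

    // Neighbor access served from fast shared memory rather than
    // repeated global-memory reads.
    float left = (i > 0) ? tile[i - 1] : tile[i];
    out[g] = 0.5f * (tile[i] + left);
}
```

For SR data laid out on a 2D or 3D regular grid, the same idea would bind the input to a texture (e.g. via `cudaCreateTextureObject` and `tex2D` fetches), which adds spatially cached reads; that variant is omitted here for brevity.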

* * *

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
