Optimizing CUDA Shared Memory Usage

Shuang Gao, Gregory D. Peterson
EECS, University of Tennessee at Knoxville, Knoxville, USA
The International Conference for High Performance Computing, Networking, Storage and Analysis (SC’15), 2015

@inproceedings{gao2015optimizing,
   title={Optimizing CUDA Shared Memory Usage},
   author={Gao, Shuang and Peterson, Gregory D.},
   booktitle={The International Conference for High Performance Computing, Networking, Storage and Analysis (SC'15)},
   year={2015}
}

CUDA shared memory is fast, on-chip storage. However, bank conflicts can cause a performance bottleneck. Current NVIDIA Tesla GPUs support memory bank accesses with configurable bit-widths. While this feature provides an efficient bank mapping scheme for 32-bit and 64-bit data types, it makes the bank conflict problem trickier to solve through manual code tuning. This paper presents a framework for automatic bank conflict analysis and optimization. Given static array access information, we calculate the conflict degree and then provide optimized data access patterns. By searching among different combinations of inter- and intra-array padding, along with bank access bit-width configurations, we can efficiently reduce or eliminate bank conflicts. From RODINIA and the CUDA SDK we selected 13 kernels with bottlenecks due to shared memory bank conflicts. After applying our approach, these benchmarks achieve 5%-35% improvement in runtime.

* * *

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors