Optimizing CUDA Shared Memory Usage
EECS, University of Tennessee at Knoxville, Knoxville, USA
The International Conference for High Performance Computing, Networking, Storage and Analysis (SC’15), 2015
@inproceedings{gao2015optimizing,
  title={Optimizing CUDA Shared Memory Usage},
  author={Gao, Shuang and Peterson, Gregory D.},
  booktitle={The International Conference for High Performance Computing, Networking, Storage and Analysis (SC'15)},
  year={2015}
}
CUDA shared memory is fast, on-chip storage, but bank conflicts can create a performance bottleneck. Current NVIDIA Tesla GPUs support memory bank accesses with configurable bit-widths. While this feature provides an efficient bank mapping scheme for 32-bit and 64-bit data types, it makes resolving bank conflicts through manual code tuning harder. This paper presents a framework for automatic bank conflict analysis and optimization. Given static array access information, we calculate the conflict degree and then provide optimized data access patterns. By searching among combinations of inter- and intra-array padding, along with bank access bit-width configurations, we can efficiently reduce or eliminate bank conflicts. From RODINIA and the CUDA SDK we selected 13 kernels whose bottlenecks stem from shared memory bank conflicts. With our approach, these benchmarks achieve 5%-35% improvement in runtime.
December 4, 2015 by hgpu