Different Optimization Strategies and Performance Evaluation of Reduction on Multicore CUDA Architecture
CE/IT Department, School of Engineering, RK University, Rajkot, India
International Journal of Engineering Research & Technology, Vol. 3, Issue 4, April 2014
@article{patel2014different,
title={Different Optimization Strategies and Performance Evaluation of Reduction on Multicore CUDA Architecture},
author={Patel, Chhaya},
journal={International Journal of Engineering Research \& Technology},
volume={3},
number={4},
year={2014}
}
The objective of this paper is to evaluate different optimization strategies on a multicore GPU architecture, using the parallel reduction algorithm as the benchmark. GPU on-chip shared memory is much faster than local and global memory: its latency is roughly 100x lower than that of uncached global memory, provided there are no bank conflicts between threads. We therefore perform the actual reduction operations in on-chip shared memory. One of the most common tasks in CUDA programming is to parallelize a loop using a kernel, and doing this efficiently requires launching enough threads to fully utilize the GPU. Instead of completely eliminating the loop when parallelizing the computation, we use a grid-stride loop, which lets a limited number of threads process any number of data elements and provides scalability and thread reusability. Several strategies are applied to optimize the parallel reduction algorithm, and we compare the execution time and bandwidth of each version. Each successive version shows a significant improvement over the previous one.
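The two techniques described above can be combined in a single kernel. The following is a minimal sketch, not the paper's actual implementation: the kernel name `reduceSum` and the block size are illustrative assumptions. Each thread first accumulates elements via a grid-stride loop, then the block performs a tree-style sum in on-chip shared memory using sequential addressing, which keeps threads free of bank conflicts.

```cuda
#include <cuda_runtime.h>

#define BLOCK_SIZE 256  // illustrative choice, not from the paper

__global__ void reduceSum(const float *in, float *out, int n) {
    __shared__ float sdata[BLOCK_SIZE];

    // Grid-stride loop: each thread accumulates many elements, so a
    // limited number of threads can cover an input of any size.
    float sum = 0.0f;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += gridDim.x * blockDim.x)
        sum += in[i];

    sdata[threadIdx.x] = sum;
    __syncthreads();

    // Tree reduction in shared memory; sequential addressing
    // (stride halving) avoids shared-memory bank conflicts.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s)
            sdata[threadIdx.x] += sdata[threadIdx.x + s];
        __syncthreads();
    }

    // One partial sum per block; a second launch of this kernel
    // (or an atomicAdd) combines the per-block results.
    if (threadIdx.x == 0)
        out[blockIdx.x] = sdata[0];
}
```

Reusing threads this way means the grid size can be tuned to the number of multiprocessors rather than the input size, which is the scalability benefit the abstract refers to.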