
Different Optimization Strategies and Performance Evaluation of Reduction on Multicore CUDA Architecture

Chhaya Patel
CE/IT Department, School of Engineering, RK University, Rajkot, India
International Journal of Engineering Research & Technology, Vol. 3, Issue 4, April 2014

@article{patel2014different,

   title={Different Optimization Strategies and Performance Evaluation of Reduction on Multicore CUDA Architecture},

   author={Patel, Chhaya},

   journal={International Journal of Engineering Research \& Technology},

   volume={3},

   number={4},

   year={2014}

}


The objective of this paper is to apply different optimization strategies on a multicore GPU architecture, using the parallel reduction algorithm for performance evaluation. GPU on-chip shared memory is much faster than local and global memory: its latency is roughly 100x lower than that of uncached global memory, provided there are no bank conflicts between threads. We therefore perform the actual reduction operations in on-chip shared memory. One of the most common tasks in CUDA programming is to parallelize a loop with a kernel, and to do this efficiently enough threads must be launched to fully utilize the GPU. Instead of eliminating the loop entirely when parallelizing the computation, we use a grid-stride loop, which lets a limited number of threads process an arbitrary number of data elements and provides scalability and thread reuse. Several strategies are used to optimize the parallel reduction algorithm, and in this paper we compare the execution time and bandwidth of each version; each successive version shows a significant improvement.
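For illustration, the following is a minimal sketch (not the paper's code) of a reduction kernel that combines the two techniques described in the abstract: a grid-stride loop so a fixed-size grid can cover any input length, and a shared-memory tree reduction within each block. Names such as reduceSum and the grid/block sizes are assumptions chosen for the example.

// Minimal sketch, assuming one partial sum per block written to blockSums.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void reduceSum(const float *in, float *blockSums, int n)
{
    extern __shared__ float sdata[];          // one float per thread in the block
    unsigned int tid = threadIdx.x;

    // Grid-stride loop: a fixed-size grid walks over an arbitrarily large input.
    float sum = 0.0f;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += blockDim.x * gridDim.x)
        sum += in[i];

    sdata[tid] = sum;
    __syncthreads();

    // Tree reduction in shared memory with sequential addressing,
    // which avoids shared-memory bank conflicts.
    for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            sdata[tid] += sdata[tid + s];
        __syncthreads();
    }

    if (tid == 0)
        blockSums[blockIdx.x] = sdata[0];
}

int main()
{
    const int n = 1 << 22, threads = 256, blocks = 128;   // assumed sizes
    float *in, *blockSums;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&blockSums, blocks * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    reduceSum<<<blocks, threads, threads * sizeof(float)>>>(in, blockSums, n);
    cudaDeviceSynchronize();

    float total = 0.0f;                        // final pass on the host for brevity
    for (int i = 0; i < blocks; ++i) total += blockSums[i];
    printf("sum = %.0f (expected %d)\n", total, n);

    cudaFree(in); cudaFree(blockSums);
    return 0;
}

The per-block partial sums are combined on the host here for brevity; the paper's more optimized versions would instead launch a second reduction pass or use fewer, larger blocks.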
