Inter-block synchronization on a GPGPU

Shaoxuan Shen
College of Engineering and Computer Science, The Australian National University, 2013





With the advent of multi-core processor technology, the graphics processing unit has evolved from a single-core graphics device into a multi-core programmable processor. Because of its architecture, the GPU turned out to be well suited not only to graphics workloads but also to general-purpose parallel computation. However, since global synchronization plays a central role in parallel programming, the adoption of the GPU as a general-purpose computation device was limited: there was no explicit support for global synchronization on the GPU without returning control to the host CPU. Recently, Xiao and Feng proposed three GPU global synchronization algorithms, implemented them on NVIDIA's CUDA platform and measured their performance; a few other CUDA GPU synchronization implementations are also publicly available. However, none of them were implemented on AMD's OpenCL platform. In this project, I implemented two GPU-based global synchronization algorithms, the Decentralized Barrier Algorithm and the Centralized Barrier Algorithm, on the OpenCL platform. The performance of these two algorithms was recorded and compared against traditional CPU-based global synchronization. The experimental results show that both GPU-based approaches outperform CPU-based global synchronization whenever the synchronization point is reached more than once in an application. Moreover, Decentralized Barrier Synchronization outperforms Centralized Barrier Synchronization, although there are restrictions on when Decentralized Barrier Synchronization can be applied.
