cuPSO: GPU Parallelization for Particle Swarm Optimization Algorithms

Chuan-Chi Wang, Chun-Yen Ho, Chia-Heng Tu, Shih-Hao Hung
National Taiwan University, Taipei, Taiwan
arXiv:2205.01313 [cs.DC], (3 May 2022)

@misc{wang2022cupso,
  doi       = {10.48550/arXiv.2205.01313},
  url       = {https://arxiv.org/abs/2205.01313},
  author    = {Wang, Chuan-Chi and Ho, Chun-Yen and Tu, Chia-Heng and Hung, Shih-Hao},
  keywords  = {Distributed, Parallel, and Cluster Computing (cs.DC), Performance (cs.PF), FOS: Computer and information sciences},
  title     = {cuPSO: GPU Parallelization for Particle Swarm Optimization Algorithms},
  publisher = {arXiv},
  year      = {2022},
  copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}

Particle Swarm Optimization (PSO) is a stochastic technique for solving optimization problems. Attempts have been made to shorten the computation times of PSO-based algorithms with massive threads on GPUs (graphics processing units), where thread groups are formed to calculate the information of particles, and the computed outputs for the particles are aggregated and analyzed to find the best solution. In particular, the reduction-based method is a common approach for handling the data aggregation and analysis of the calculated particle information. Nevertheless, based on our analysis, the reduction-based method suffers from excessive memory accesses and thread synchronization overheads. In this paper, we propose a novel algorithm that alleviates these overheads using atomic functions. The threads within a thread group conditionally and atomically update their calculated results to the intra-group data queue, which avoids the frequent memory accesses incurred by the parallel reduction operations. Furthermore, we develop an enhanced version of the algorithm that relaxes the synchronization barrier among the thread groups: the thread groups run asynchronously and update the global, lock-protected variables only occasionally, when necessary. Our experimental results show that our proposed algorithm running on an Nvidia GPU is about 200 times faster than the serial version executed on an Intel Xeon CPU. Moreover, the novel algorithm outperforms the state-of-the-art method (the parallel reduction approach) by a factor of 2.2.

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
