A GPU-based Parallel Fireworks Algorithm for Optimization

Ke Ding, Shaoqiu Zheng, Ying Tan
Key Laboratory of Machine Perception (MOE), Department of Machine Intelligence, School of Electronics Engineering and Computer Science, Peking University, Beijing, China
ACM Genetic and Evolutionary Computation Conference (GECCO 2013), 2013


@inproceedings{
   title={A GPU-based Parallel Fireworks Algorithm for Optimization},
   author={Ding, Ke and Zheng, Shaoqiu and Tan, Ying},
   booktitle={ACM Genetic and Evolutionary Computation Conference (GECCO 2013)},
   year={2013}
}



Swarm intelligence algorithms have been widely used to solve difficult real-world problems in both academic and engineering domains. Thanks to their inherent parallelism, various parallelized swarm intelligence algorithms have been proposed to speed up the optimization process, especially on massively parallel GPU architectures. However, conventional swarm intelligence algorithms are usually not designed specifically for the GPU architecture: they either cannot fully exploit the tremendous computational power of GPUs or do not scale effectively as problem sizes grow. To address this shortcoming, a novel GPU-based Fireworks Algorithm (GPU-FWA) is proposed in this paper. To fully leverage the high performance of GPUs, GPU-FWA modifies the original FWA to better suit the GPU architecture. An implementation of GPU-FWA on the CUDA platform is presented and tested on a suite of well-known benchmark optimization problems. We extensively evaluated and compared GPU-FWA with FWA and PSO, with respect to both running time and solution quality, on a state-of-the-art commodity Fermi GPU. Experimental results demonstrate that GPU-FWA generally outperforms both FWA and PSO, and achieves speedups as high as 200x compared to the sequential versions of FWA and PSO running on an up-to-date CPU. GPU-FWA is also easy to implement and scales well.
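For readers unfamiliar with the Fireworks Algorithm that GPU-FWA builds on, the following is a minimal CPU-side NumPy sketch of FWA's core explosion-and-selection loop, not the paper's CUDA implementation. Better fireworks receive more sparks with smaller explosion amplitudes, and the best locations survive to the next iteration. All names and parameters here are illustrative, and the simple elitist selection stands in for the original FWA's distance-based selection.

```python
import numpy as np

def sphere(x):
    # Benchmark objective: sum of squares, minimized at the origin.
    return np.sum(x**2, axis=-1)

def fwa_sketch(f, dim=10, n_fireworks=5, max_sparks=20, amp=5.0,
               bounds=(-10.0, 10.0), iters=200, seed=0):
    # Illustrative FWA-style loop; parameter names are ours, not the paper's.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    fw = rng.uniform(lo, hi, size=(n_fireworks, dim))
    for _ in range(iters):
        fit = f(fw)
        worst, best = fit.max(), fit.min()
        # Fitter fireworks get more sparks but smaller amplitudes.
        sparks_per = np.maximum(
            1, (max_sparks * (worst - fit + 1e-12) /
                (np.sum(worst - fit) + 1e-12)).astype(int))
        amps = amp * (fit - best + 1e-12) / (np.sum(fit - best) + 1e-12)
        candidates = [fw]
        for i in range(n_fireworks):
            offsets = rng.uniform(-amps[i], amps[i],
                                  size=(sparks_per[i], dim))
            candidates.append(np.clip(fw[i] + offsets, lo, hi))
        pool = np.vstack(candidates)
        # Simplified elitist selection: keep the n_fireworks best points.
        fw = pool[np.argsort(f(pool))[:n_fireworks]]
    return fw[0], f(fw[0][None])[0]

best_x, best_val = fwa_sketch(sphere)
```

In GPU-FWA, each firework's spark generation and evaluation maps naturally onto independent thread blocks, which is the kind of data parallelism the inner loop above exposes.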

* * *

HGPU group © 2010-2021 hgpu.org

All rights belong to the respective authors
