GPU-Based Asynchronous Global Optimization with Particle Swarm

M. P. Wachowiak, A. E. Lambe Foster
Department of Computer Science and Mathematics, Nipissing University, North Bay, ON Canada, P1B 8L7
J. Phys.: Conf. Ser. 385 012012, 2012

@inproceedings{wachowiak2012gpu,
   title={GPU-Based Asynchronous Global Optimization with Particle Swarm},
   author={Wachowiak, M. P. and Lambe Foster, A. E.},
   booktitle={Journal of Physics: Conference Series},
   volume={385},
   number={1},
   pages={012012},
   year={2012},
   organization={IOP Publishing}
}

The recent upsurge in research into general-purpose applications for graphics processing units (GPUs) has made low-cost high-performance computing increasingly accessible. Many global optimization algorithms that have previously benefited from parallel computation are now poised to take advantage of general-purpose GPU computing as well. In this paper, a global parallel asynchronous particle swarm optimization (PSO) approach is employed to solve three relatively complex, realistic parameter estimation problems in which each processor performs significant computation. Although PSO is readily parallelizable, memory bandwidth limitations on GPUs must be addressed, which is accomplished by minimizing communication among individual population members through asynchronous operations. The effect of asynchronous PSO on robustness and efficiency is assessed as a function of problem and population size. Experiments were performed with different population sizes on NVIDIA GPUs and on single-core CPUs. For successful trials, speedup increases markedly with population size, indicating that more particles may be used to improve algorithm robustness while keeping run time nearly constant. This work also suggests that asynchronous operations on the GPU may be viable in stochastic population-based algorithms, increasing efficiency without sacrificing the quality of the solutions.
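
The asynchronous design described in the abstract, in which particles exchange information only through a shared best solution instead of synchronizing at iteration barriers, can be illustrated with a short CUDA sketch. The following is a minimal sketch in the spirit of the paper, not the authors' implementation: it assumes one thread per particle, substitutes a shifted 2-D sphere function for the paper's parameter estimation objectives, and uses standard inertia-weight PSO coefficients.

// A minimal CUDA sketch of asynchronous PSO in the spirit of the paper,
// not the authors' implementation. One thread per particle; a shifted 2-D
// sphere function stands in for the paper's parameter estimation problems.
#include <cstdio>
#include <cfloat>
#include <curand_kernel.h>

#define DIM   2
#define ITERS 2000

__device__ float gBestFit = FLT_MAX;  // shared (global) best fitness
__device__ float gBestPos[DIM];       // shared best position
__device__ int   gLock = 0;           // try-lock guarding the two above

// Shifted sphere: minimum 0 at (1, ..., 1).
__device__ float objective(const float *x) {
    float s = 0.0f;
    for (int d = 0; d < DIM; ++d) s += (x[d] - 1.0f) * (x[d] - 1.0f);
    return s;
}

__global__ void asyncPso(int n, unsigned long long seed) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    curandState rng;
    curand_init(seed, i, 0, &rng);

    float x[DIM], v[DIM], pb[DIM];
    for (int d = 0; d < DIM; ++d) {
        x[d]  = 10.0f * curand_uniform(&rng) - 5.0f;  // init in [-5, 5]
        v[d]  = 0.0f;
        pb[d] = x[d];
    }
    float pbFit = objective(x);

    for (int t = 0; t < ITERS; ++t) {
        // Asynchronous read of the global best: no barrier, no lock.
        // Stale or torn values only perturb the stochastic search.
        float g[DIM];
        for (int d = 0; d < DIM; ++d) g[d] = gBestPos[d];

        // Standard inertia-weight velocity/position update.
        for (int d = 0; d < DIM; ++d) {
            float r1 = curand_uniform(&rng), r2 = curand_uniform(&rng);
            v[d] = 0.7298f * v[d]
                 + 1.49618f * r1 * (pb[d] - x[d])
                 + 1.49618f * r2 * (g[d] - x[d]);
            x[d] += v[d];
        }

        float f = objective(x);
        if (f < pbFit) {
            pbFit = f;
            for (int d = 0; d < DIM; ++d) pb[d] = x[d];
            // Publish an improvement through a non-blocking try-lock:
            // if another particle holds the lock, skip this round rather
            // than spin (spinning can deadlock a warp on pre-Volta GPUs).
            if (f < gBestFit && atomicCAS(&gLock, 0, 1) == 0) {
                if (f < gBestFit) {       // re-check under the lock
                    gBestFit = f;
                    for (int d = 0; d < DIM; ++d) gBestPos[d] = x[d];
                }
                __threadfence();          // make writes visible first
                atomicExch(&gLock, 0);    // release
            }
        }
    }
}

int main() {
    const int n = 1024;                   // population size
    asyncPso<<<(n + 255) / 256, 256>>>(n, 42ULL);
    cudaDeviceSynchronize();

    float best;
    cudaMemcpyFromSymbol(&best, gBestFit, sizeof(float));
    printf("best fitness: %g\n", best);   // approaches 0 on the sphere
    return 0;
}

Compiled with nvcc, the entire optimization runs in a single kernel launch. The non-blocking try-lock, rather than a spinlock, avoids intra-warp deadlock on older architectures, and the deliberately unsynchronized reads of the shared best are one way to realize the communication-minimizing asynchrony the abstract refers to.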