Parallel Genetic Programming on Graphics Processing Units

Douglas A. Augusto, Heder S. Bernardino, Helio J.C. Barbosa
Laboratorio Nacional de Computacao Cientica (LNCC/MCTI), Rio de Janeiro, Brazil
Chapter in book "Genetic Programming – New Approaches and Successful Applications", Edited by Sebastian Ventura, ISBN 978-953-51-0809-2, 2012


@incollection{augusto2012parallel,
   title={Parallel Genetic Programming on Graphics Processing Units},
   author={Augusto, D.A. and Bernardino, H.S. and Barbosa, H.J.C.},
   booktitle={Genetic Programming -- New Approaches and Successful Applications},
   editor={Ventura, Sebastian},
   isbn={978-953-51-0809-2},
   year={2012}
}





In program inference, evaluating how well a candidate solution solves a given task is usually a computationally intensive procedure. Most of the time, the evaluation involves either submitting the program to a simulation process or testing its behavior on many input arguments; both situations can be very time-consuming. Things get worse when the optimization algorithm needs to evaluate a population of programs over several iterations, as is the case in genetic programming. Genetic programming (GP) is well known for being a computationally demanding technique, a consequence of its ambitious goal: to automatically generate computer programs, in an arbitrary language, using virtually no domain knowledge. For instance, evolving a classifier, a program that takes a set of attributes and predicts the class to which they belong, may be significantly costly depending on the size of the training dataset, that is, the amount of data needed to estimate the prediction accuracy of a single candidate classifier.

Fortunately, GP is an inherently parallel paradigm, making it easy to exploit any number of available computational units, whether just a few or many thousands. Moreover, it usually does not matter whether the underlying hardware can simultaneously process multiple instruction and data streams ("MIMD") or only multiple data streams ("SIMD"). Basically, GP exhibits three levels of parallelism: (i) population-level parallelism, in which many populations evolve simultaneously; (ii) program-level parallelism, in which programs are evaluated in parallel; and (iii) data-level parallelism, in which the individual training points of a single program are evaluated simultaneously.
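To make the parallelism levels concrete, the following is a minimal CPU-side sketch of program-level parallel fitness evaluation on a toy symbolic-regression task. All names, the thread-pool mechanism, and the dataset are illustrative assumptions, not from the chapter; on a GPU, the same structure would map programs and training points onto processing elements.

```python
# Illustrative sketch of GP's parallelism levels (not the chapter's code).
from concurrent.futures import ThreadPoolExecutor

# A "program" is just a callable here; real GP would evolve expression trees.
population = [
    lambda x: x * x,    # candidate 1
    lambda x: x + 1.0,  # candidate 2
    lambda x: 2.0 * x,  # candidate 3
]

# Training data: try to fit f(x) = x^2 on a few sample points.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x * x for x in xs]

def fitness(program):
    # Data-level parallelism: each (x, y) pair could be handled by a
    # separate processing element; a sequential sum stands in for that here.
    return sum((program(x) - y) ** 2 for x, y in zip(xs, ys))

# Program-level parallelism: evaluate all candidates concurrently.
with ThreadPoolExecutor() as pool:
    errors = list(pool.map(fitness, population))

best = population[errors.index(min(errors))]  # candidate 1, with zero error
```

Population-level parallelism, the coarsest grain, would simply run several independent copies of this whole loop, one per (sub)population.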