{"id":5625,"date":"2011-09-19T18:29:25","date_gmt":"2011-09-19T15:29:25","guid":{"rendered":"http:\/\/hgpu.org\/?p=5625"},"modified":"2011-09-19T18:29:25","modified_gmt":"2011-09-19T15:29:25","slug":"parallel-divide-and-evolve-experiments-with-openmp-on-a-multicore-machine","status":"publish","type":"post","link":"https:\/\/hgpu.org\/?p=5625","title":{"rendered":"Parallel divide-and-evolve: experiments with OpenMP on a multicore machine"},"content":{"rendered":"<p>Multicore machines are becoming a standard way to speed up system performance. Having instantiated the evolutionary metaheuristic DAEX with the forward-search YAHSP planner, we investigate the global parallelism approach, which exploits the intrinsic parallelism of individual evaluation. This paper describes a parallel shared-memory version of the DAEYAHSP planning system using the OpenMP directive-based API. The parallelization scheme applies at a high level of abstraction and can thus be used by any evolutionary algorithm implemented with the Evolving Objects (EO) framework. The proof of concept is validated on a 48-core machine with two planning tasks taken from the last international planning competition. Experiments show significant speedups as the number of cores increases. This preliminary work opens an avenue for parallelizing any evolutionary algorithm developed with EO that targets multicore architectures.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Multicore machines are becoming a standard way to speed up system performance. Having instantiated the evolutionary metaheuristic DAEX with the forward-search YAHSP planner, we investigate the global parallelism approach, which exploits the intrinsic parallelism of individual evaluation. 
This paper describes a parallel shared-memory version of the DAEYAHSP planning system using [&hellip;]<\/p>\n","protected":false},"author":351,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[36,11,3],"tags":[1787,117,1782,613,748,70],"class_list":["post-5625","post","type-post","status-publish","format-standard","hentry","category-algorithms","category-computer-science","category-paper","tag-algorithms","tag-artificial-intelligence","tag-computer-science","tag-evolutionary-computations","tag-metaheuristics","tag-programming-techniques"],"views":2008,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/5625","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/users\/351"}],"replies":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5625"}],"version-history":[{"count":0,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/5625\/revisions"}],"wp:attachment":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5625"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5625"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?r
est_route=%2Fwp%2Fv2%2Ftags&post=5625"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}