{"id":9614,"date":"2013-06-19T23:36:19","date_gmt":"2013-06-19T20:36:19","guid":{"rendered":"http:\/\/hgpu.org\/?p=9614"},"modified":"2013-06-19T23:36:19","modified_gmt":"2013-06-19T20:36:19","slug":"parallel-asynchronous-modelization-and-execution-of-cholesky-algorithm-using-petri-nets","status":"publish","type":"post","link":"https:\/\/hgpu.org\/?p=9614","title":{"rendered":"Parallel Asynchronous Modelization and Execution of Cholesky Algorithm using Petri Nets"},"content":{"rendered":"<p>Parallelizing algorithms with hard data dependencies poses a task synchronization problem. Synchronous parallel versions are simple to model and program, but inefficient in terms of scalability and processor utilization. The same problem affects asynchronous versions with elementary static task scheduling. Efficient asynchronous algorithms implement out-of-order execution and are complex to model and execute. In this paper we introduce Petri nets as a tool that simplifies the modeling and execution of parallel asynchronous versions of this kind of algorithm, while using an efficient dynamic task scheduling implementation. The Cholesky factorization algorithm was used as a testbed. Simulations were performed as a proof of concept, based on real execution times on GPGPUs, and show excellent performance.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Parallelizing algorithms with hard data dependencies poses a task synchronization problem. Synchronous parallel versions are simple to model and program, but inefficient in terms of scalability and processor utilization. The same problem affects asynchronous versions with elementary static task scheduling. 
Efficient asynchronous algorithms implement out-of-order execution and are complex [&hellip;]<\/p>\n","protected":false},"author":351,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[36,11,89,3],"tags":[1787,1782,14,288,20,953,854],"class_list":["post-9614","post","type-post","status-publish","format-standard","hentry","category-algorithms","category-computer-science","category-nvidia-cuda","category-paper","tag-algorithms","tag-computer-science","tag-cuda","tag-factorization","tag-nvidia","tag-nvidia-geforce-gtx-470","tag-task-scheduling"],"views":2371,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/9614","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/users\/351"}],"replies":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=9614"}],"version-history":[{"count":0,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/9614\/revisions"}],"wp:attachment":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=9614"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=9614"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hgpu.org\/index
.php?rest_route=%2Fwp%2Fv2%2Ftags&post=9614"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}