{"id":7673,"date":"2012-05-29T23:28:36","date_gmt":"2012-05-29T20:28:36","guid":{"rendered":"http:\/\/hgpu.org\/?p=7673"},"modified":"2012-05-29T23:28:36","modified_gmt":"2012-05-29T20:28:36","slug":"hybrid-update-algorithms-for-regular-lattice-and-small-world-ising-models-on-graphical-processing-units","status":"publish","type":"post","link":"https:\/\/hgpu.org\/?p=7673","title":{"rendered":"Hybrid Update Algorithms for Regular Lattice and Small-World Ising Models on Graphical Processing Units"},"content":{"rendered":"<p>Local and cluster Monte Carlo update algorithms offer a complex tradeoff space for optimising the performance of simulations of the Ising model. We systematically explore tradeoffs between hybrid Metropolis and Wolff cluster updates for the 3D Ising model using data-parallelism and graphical processing units. We investigate performance both for regular lattices and for small-world perturbations, where the lattice becomes a generalised graph and locality can no longer be assumed. Despite our use of customised Compute Unified Device Architecture (CUDA) code optimisations, we find that the Wolff cluster update systematically loses computational efficiency to the localised Metropolis algorithm as the small-world rewiring parameter is increased. This manifests itself as a phase transition in the computational performance.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Local and cluster Monte Carlo update algorithms offer a complex tradeoff space for optimising the performance of simulations of the Ising model. We systematically explore tradeoffs between hybrid Metropolis and Wolff cluster updates for the 3D Ising model using data-parallelism and graphical processing units. We investigate performance both for regular lattices and for [&hellip;]<\/p>\n","protected":false},"author":351,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[36,89,3,12],"tags":[1787,14,71,20,1783],"class_list":["post-7673","post","type-post","status-publish","format-standard","hentry","category-algorithms","category-nvidia-cuda","category-paper","category-physics","tag-algorithms","tag-cuda","tag-ising-model","tag-nvidia","tag-physics"],"views":2262,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/7673","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/users\/351"}],"replies":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7673"}],"version-history":[{"count":0,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/7673\/revisions"}],"wp:attachment":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7673"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7673"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7673"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}