Mojo: MLIR-Based Performance-Portable HPC Science Kernels on GPUs for the Python Ecosystem

hgpu.org, September 28, 2025 — https://hgpu.org/?p=30263

We explore the performance and portability of the novel Mojo language for scientific computing workloads on GPUs. As the first language built on LLVM's Multi-Level Intermediate Representation (MLIR) compiler infrastructure, Mojo aims to close performance and productivity gaps by combining Python's interoperability with CUDA-like syntax for compile-time-portable GPU programming. We target four scientific workloads: a seven-point stencil (memory-bound), BabelStream (memory-bound), miniBUDE (compute-bound), and Hartree-Fock (compute-bound with atomic operations), and compare their performance against vendor baselines on NVIDIA H100 and AMD MI300A GPUs. We show that Mojo's performance is competitive with CUDA and HIP for memory-bound kernels, whereas gaps remain on AMD GPUs for atomic operations and on both AMD and NVIDIA GPUs for fast-math compute-bound kernels. Although the learning curve and programming requirements are still fairly low-level, Mojo can close significant gaps in the fragmented Python ecosystem at the convergence of scientific computing and AI.

Categories: Computer science, NVIDIA CUDA, Paper
Tags: AI, AMD Radeon Instinct MI300A, Compilers, CUDA, HIP, HPC, NVIDIA, NVIDIA H100, Package, Python, ROCm
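For context on the first workload: a seven-point stencil updates each interior point of a 3-D grid from a weighted sum of itself and its six face neighbours, which makes it bandwidth-limited rather than compute-limited. The sketch below is our own NumPy illustration of that access pattern, not code from the paper; the coefficients `c0` and `c1` and all names are assumed for illustration (here chosen so the weights sum to one).

```python
import numpy as np

def seven_point_stencil(u, c0=0.5, c1=1.0 / 12.0):
    """One Jacobi-style sweep of a 3-D seven-point stencil.

    Each interior point becomes c0 * itself plus c1 times the sum of
    its six face neighbours; boundary points are copied unchanged.
    Coefficients are illustrative only.
    """
    out = u.copy()
    out[1:-1, 1:-1, 1:-1] = (
        c0 * u[1:-1, 1:-1, 1:-1]
        + c1 * (u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1]     # x-neighbours
                + u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1]   # y-neighbours
                + u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:])  # z-neighbours
    )
    return out

# With c0 + 6*c1 == 1, a constant field is a fixed point of the sweep.
u = np.ones((8, 8, 8))
v = seven_point_stencil(u)
```

Each output point reads seven inputs but performs only a handful of flops, which is why the paper classifies this kernel (like BabelStream) as memory-bound: performance is governed by how fast the GPU can stream the grid through memory.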