{"id":19719,"date":"2020-02-16T17:56:13","date_gmt":"2020-02-16T15:56:13","guid":{"rendered":"https:\/\/hgpu.org\/?p=19719"},"modified":"2020-02-16T17:56:13","modified_gmt":"2020-02-16T15:56:13","slug":"ism2-optimizing-irregular-shaped-matrix-matrix-multiplication-on-gpus","status":"publish","type":"post","link":"https:\/\/hgpu.org\/?p=19719","title":{"rendered":"ISM2: Optimizing Irregular-Shaped Matrix-Matrix Multiplication on GPUs"},"content":{"rendered":"<p>Linear algebra operations have been widely used in big data analytics and scientific computations. Many works have been done on optimizing linear algebra operations on GPUs with regular-shaped input. However, few works are focusing on fully utilizing GPU resources when the input is not regular-shaped. Current optimizations lack of considering fully utilizing the memory bandwidth and computing power, therefore they could only achieve sub-optimal performance. In this paper, we propose two efficient irregular-shaped matrix-matrix multiplication (GEMM) algorithms on GPUs, called TSM2 and ISM2. Both of them focus on optimizing GEMMs with various input sizes where at least one of the matrices is tall-and-skinny. We implement our proposed algorithms and test on several modern Nvidia GPU micro-architectures. Experiments show that compared to state of the art, our TSM2 speeds up the computation by 1.1x~3x and improves the memory bandwidth utilization and computing power utilization by 8%~47.6% and 7%~37.3%, respectively, when the size of regular matrix is relatively large or medium. Moreover, our ISM2 speeds up the GEMM by 1.1x~3.5x and improve the memory bandwidth utilization by up to 55% when the size of regular matrix is relatively small.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Linear algebra operations have been widely used in big data analytics and scientific computations. Many works have been done on optimizing linear algebra operations on GPUs with regular-shaped input. However, few works are focusing on fully utilizing GPU resources when the input is not regular-shaped. 
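To illustrate the problem setting, the sketch below shows a baseline cuBLAS call for an irregular-shaped GEMM in which one operand is tall-and-skinny. This is not the paper's TSM2 or ISM2 implementation, and the matrix dimensions are illustrative assumptions rather than sizes from the paper; it only makes concrete the shape of input the proposed algorithms target.

// Minimal sketch (assumption: illustrative sizes, plain cuBLAS baseline, not TSM2/ISM2).
// C = A * B, where A is a large "regular" matrix and B is tall-and-skinny (n << k).
#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    // A: m x k, B: k x n, C: m x n. With n this small, a GEMM tuned for large
    // square inputs tends to underutilize memory bandwidth and SMs -- the case
    // the paper's tall-and-skinny optimizations address.
    const int m = 10240, k = 10240, n = 8;

    float *dA, *dB, *dC;
    cudaMalloc(&dA, sizeof(float) * m * k);
    cudaMalloc(&dB, sizeof(float) * k * n);
    cudaMalloc(&dC, sizeof(float) * m * n);
    cudaMemset(dA, 0, sizeof(float) * m * k);   // placeholder data
    cudaMemset(dB, 0, sizeof(float) * k * n);

    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f, beta = 0.0f;
    // cuBLAS assumes column-major storage; leading dimensions equal the row counts.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                m, n, k,
                &alpha, dA, m, dB, k,
                &beta, dC, m);
    cudaDeviceSynchronize();

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}

Compiled with, e.g., nvcc example.cu -lcublas, timing this call against a square GEMM of comparable FLOP count is one way to observe the bandwidth and occupancy gap that motivates the paper.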