{"id":13751,"date":"2015-03-18T01:10:04","date_gmt":"2015-03-17T23:10:04","guid":{"rendered":"http:\/\/hgpu.org\/?p=13751"},"modified":"2015-03-18T01:10:04","modified_gmt":"2015-03-17T23:10:04","slug":"fast-radix-sort-for-sparse-linear-algebra-on-gpu","status":"publish","type":"post","link":"https:\/\/hgpu.org\/?p=13751","title":{"rendered":"Fast Radix Sort for Sparse Linear Algebra on GPU"},"content":{"rendered":"<p>Fast sorting is an important step in many parallel algorithms that require data ranking, ordering, or partitioning. Parallel sorting is a widely researched subject, and many algorithms have been developed over the years. In this paper, we focus on implementing highly efficient sorting routines for sparse linear algebra operations such as parallel sparse matrix&#8211;matrix multiplication or factorization. We propose a fast and simple-to-implement variant of the parallel radix sort algorithm, well suited to the GPU architecture. Extensive testing on both synthetic and real-world data shows that our method outperforms similar state-of-the-art implementations. Our implementation is bandwidth-efficient: it achieves sorting rates close to the theoretical upper bound imposed by memory bandwidth. We also present several interesting code optimizations relevant to GPU programming.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Fast sorting is an important step in many parallel algorithms that require data ranking, ordering, or partitioning. Parallel sorting is a widely researched subject, and many algorithms have been developed over the years. 
In this paper, we focus on implementing highly efficient sorting routines for sparse linear algebra operations such as parallel sparse matrix [&hellip;]<\/p>\n","protected":false},"author":351,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[36,11,89,90,3],"tags":[1787,1782,14,288,37,324,20,1306,1504,1793,9,421],"class_list":["post-13751","post","type-post","status-publish","format-standard","hentry","category-algorithms","category-computer-science","category-nvidia-cuda","category-opencl","category-paper","tag-algorithms","tag-computer-science","tag-cuda","tag-factorization","tag-linear-algebra","tag-matrix-multiplication","tag-nvidia","tag-nvidia-geforce-gtx-680","tag-nvidia-geforce-gtx-780","tag-opencl","tag-sorting","tag-sparse-matrix"],"views":2586,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/13751","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/users\/351"}],"replies":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=13751"}],"version-history":[{"count":0,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/13751\/revisions"}],"wp:attachment":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=13751"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=13751"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=13751"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}