{"id":16914,"date":"2017-01-16T23:59:50","date_gmt":"2017-01-16T21:59:50","guid":{"rendered":"http:\/\/hgpu.org\/?p=16914"},"modified":"2017-01-16T23:59:50","modified_gmt":"2017-01-16T21:59:50","slug":"an-n-log-n-parallel-fast-direct-solver-for-kernel-matrices","status":"publish","type":"post","link":"https:\/\/hgpu.org\/?p=16914","title":{"rendered":"An N log N Parallel Fast Direct Solver for Kernel Matrices"},"content":{"rendered":"<p>Kernel matrices appear in machine learning and non-parametric statistics. Given N points in d dimensions and a kernel function that requires $mathcal{O}(d)$ work to evaluate, we present an $mathcal{O}(dNlog N)$-work algorithm for the approximate factorization of a regularized kernel matrix, a common computational bottleneck in the training phase of a learning task. With this factorization, solving a linear system with a kernel matrix can be done with $mathcal{O}(Nlog N)$ work. Our algorithm only requires kernel evaluations and does not require that the kernel matrix admits an efficient global low rank approximation. Instead our factorization only assumes low-rank properties for the off-diagonal blocks under an appropriate row and column ordering. We also present a hybrid method that, when the factorization is prohibitively expensive, combines a partial factorization with iterative methods. As a highlight, we are able to approximately factorize a dense 11M*11M kernel matrix in 2 minutes on 3,072 x86 &quot;Haswell&quot; cores and a 4.5M*4.5M matrix in 1 minute using 4,352 &quot;Knights Landing&quot; cores.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Kernel matrices appear in machine learning and non-parametric statistics. Given N points in d dimensions and a kernel function that requires $mathcal{O}(d)$ work to evaluate, we present an $mathcal{O}(dNlog N)$-work algorithm for the approximate factorization of a regularized kernel matrix, a common computational bottleneck in the training phase of a learning task. 
With this factorization, [&hellip;]<\/p>\n","protected":false},"author":351,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[36,11,3],"tags":[1787,659,1782,288,1483,1025,176,67],"class_list":["post-16914","post","type-post","status-publish","format-standard","hentry","category-algorithms","category-computer-science","category-paper","tag-algorithms","tag-computational-complexity","tag-computer-science","tag-factorization","tag-intel-xeon-phi","tag-machine-learning","tag-package","tag-performance"],"views":2413,"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/16914","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/users\/351"}],"replies":[{"embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=16914"}],"version-history":[{"count":0,"href":"https:\/\/hgpu.org\/index.php?rest_route=\/wp\/v2\/posts\/16914\/revisions"}],"wp:attachment":[{"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=16914"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=16914"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hgpu.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=16914"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}