Techniques for Mapping Synthetic Aperture Radar Processing Algorithms to Multi-GPU Clusters

Published on hgpu.org, 2012-10-06 (https://hgpu.org/?p=8328)

This paper presents a design for parallel processing of synthetic aperture radar (SAR) data using multiple Graphics Processing Units (GPUs). Our approach supports real-time reconstruction of a two-dimensional image from a matrix of echo pulses and their response values. Key to runtime efficiency is a partitioning scheme that divides the output image into tiles and the input matrix into the collection of pulses associated with each tile. Each image tile and its associated pulse set are distributed to thread blocks across multiple GPUs, which perform the computation in parallel with near-optimal I/O cost; the partial results are subsequently combined by a host CPU. Further efficiency is realized by the GPU's low-latency thread scheduling, which masks memory access latencies. A performance analysis quantifies runtime as a function of the input/output parameters and the number of GPUs. Experimental results were generated with 10 NVIDIA Tesla C2050 GPUs having a maximum throughput of 972 Gflop/s. Our approach scales well for output (reconstructed) image sizes from 2,048 x 2,048 to 8,192 x 8,192 pixels.

Categories: Algorithms, CUDA, Paper, Signal processing
Tags: algorithms, CUDA, GPU cluster, image reconstruction, NVIDIA, signal processing, Tesla C2050
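To make the partitioning scheme concrete, here is a minimal host-side sketch of the idea the abstract describes: divide the output image into square tiles, pair each tile with its associated pulse set, and distribute the tile/pulse work units across the available GPUs before combining the partial results on the CPU. This is an illustrative assumption, not code from the paper; all names (`Tile`, `make_tiles`, `assign_to_gpus`) are hypothetical, the round-robin assignment is one plausible distribution policy, and the real implementation would launch CUDA thread blocks per tile rather than build Python lists.

```python
# Hypothetical sketch of the tile/pulse partitioning described in the
# abstract. Not the authors' code; names and the round-robin policy
# are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Tile:
    row: int            # tile row index within the output image
    col: int            # tile column index within the output image
    size: int           # tile edge length in pixels
    pulses: list = field(default_factory=list)  # pulse set for this tile


def make_tiles(image_size: int, tile_size: int) -> list:
    """Divide a square output image into non-overlapping square tiles."""
    assert image_size % tile_size == 0, "tile size must divide image size"
    n = image_size // tile_size
    return [Tile(r, c, tile_size) for r in range(n) for c in range(n)]


def assign_to_gpus(tiles: list, num_gpus: int) -> list:
    """Round-robin distribution of tile work units across GPUs
    (a stand-in for dispatching one thread block per tile on each
    device; partial images would later be merged on the host CPU)."""
    buckets = [[] for _ in range(num_gpus)]
    for i, tile in enumerate(tiles):
        buckets[i % num_gpus].append(tile)
    return buckets


# Example matching the paper's largest configuration: an 8,192 x 8,192
# output image split into 2,048 x 2,048 tiles (16 tiles) over 10 GPUs.
tiles = make_tiles(8192, 2048)
work = assign_to_gpus(tiles, 10)
```

With these illustrative sizes the scheme yields 16 tiles, so 6 of the 10 GPUs receive two tiles and 4 receive one; the host then stitches the reconstructed tiles back into the full image.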