{"id":12510,"date":"2014-07-16T00:03:07","date_gmt":"2014-07-15T21:03:07","guid":{"rendered":"http:\/\/hgpu.org\/?p=12510"},"modified":"2014-07-16T00:03:07","modified_gmt":"2014-07-15T21:03:07","slug":"gpudrive-reconsidering-storage-accesses-for-gpu-acceleration","status":"publish","type":"post","link":"https:\/\/hgpu.org\/?p=12510","title":{"rendered":"GPUdrive: Reconsidering Storage Accesses for GPU Acceleration"},"content":{"rendered":"<p>GPU-accelerated data-intensive applications demonstrate in excess of ten-fold speedups over CPU-only approaches. However, file-driven data movement between the CPU and the GPU can degrade performance and energy efficiencies by an order of magnitude as a result of traditional storage latency and ineffectual memory management. In this paper, we first analyze these two critical performance bottlenecks in GPU-accelerated data processing. We then study design considerations to reduce the overheads imposed by file-driven data movements in GPU computing. To address these issues, we prototype a low cost and low power all-flash array designed specifically for stream-based, I\/O-rich workloads inherent in GPUs. As preliminary evaluation results, we demonstrate that our early-stage all-flash array solution can eliminate 60% ~ 90% performance discrepancy between memory-level GPU data transfer rates and storage access bandwidth by removing unnecessary data copies, memory management, and user\/kernel-mode switching in the current system software stack. In addition, our all-flash array prototype consumes less dynamic power than the baseline storage array by 49%, on average.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>GPU-accelerated data-intensive applications demonstrate in excess of ten-fold speedups over CPU-only approaches. However, file-driven data movement between the CPU and the GPU can degrade performance and energy efficiencies by an order of magnitude as a result of traditional storage latency and ineffectual memory management. 
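For context, the sketch below illustrates the conventional file-to-GPU data path that the abstract identifies as the bottleneck: data is first read through the kernel's page cache into a host buffer (a user/kernel-mode switch plus a copy), and only then staged into device memory with a second copy over PCIe. This is a minimal, hypothetical example for illustration, not the paper's GPUdrive prototype; the file name and buffer handling are assumptions.

```cuda
// Conventional read-then-copy path from storage to GPU memory.
// Sketch only: illustrates the redundant copies and mode switches the paper targets.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main(int argc, char **argv) {
    const char *path = (argc > 1) ? argv[1] : "input.dat";  // hypothetical input file
    FILE *fp = fopen(path, "rb");
    if (!fp) { perror("fopen"); return 1; }

    fseek(fp, 0, SEEK_END);
    size_t bytes = (size_t)ftell(fp);
    rewind(fp);

    // Copy 1: kernel page cache -> pinned host buffer
    // (each fread crosses the user/kernel boundary).
    void *host_buf = nullptr;
    cudaMallocHost(&host_buf, bytes);
    if (fread(host_buf, 1, bytes, fp) != bytes) { perror("fread"); return 1; }
    fclose(fp);

    // Copy 2: host buffer -> device memory over PCIe.
    void *dev_buf = nullptr;
    cudaMalloc(&dev_buf, bytes);
    cudaMemcpy(dev_buf, host_buf, bytes, cudaMemcpyHostToDevice);

    // ... launch GPU kernels that consume dev_buf here ...

    cudaFree(dev_buf);
    cudaFreeHost(host_buf);
    return 0;
}
```

The paper's all-flash array approach, as described in the abstract, aims to eliminate precisely this intermediate staging: the extra host-side copies, memory management, and user/kernel-mode switches in the existing system software stack.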