GPUdrive: Reconsidering Storage Accesses for GPU Acceleration

Mustafa Shihab, Karl Taht, Myoungsoo Jung
Computer Architecture and Memory Systems Laboratory, Department of Electrical Engineering, The University of Texas at Dallas
Fourth Workshop on Architectures and Systems for Big Data (ASBD 2014), 2014

@inproceedings{shihab2014gpudrive,
   title={GPUdrive: Reconsidering Storage Accesses for GPU Acceleration},
   author={Shihab, Mustafa and Taht, Karl and Jung, Myoungsoo},
   booktitle={Fourth Workshop on Architectures and Systems for Big Data (ASBD 2014)},
   year={2014}
}

GPU-accelerated data-intensive applications demonstrate speedups in excess of ten-fold over CPU-only approaches. However, file-driven data movement between the CPU and the GPU can degrade performance and energy efficiency by an order of magnitude, owing to traditional storage latency and ineffective memory management. In this paper, we first analyze these two critical performance bottlenecks in GPU-accelerated data processing. We then study design considerations for reducing the overheads imposed by file-driven data movement in GPU computing. To address these issues, we prototype a low-cost, low-power all-flash array designed specifically for the stream-based, I/O-rich workloads inherent in GPU computing. Our preliminary evaluation demonstrates that this early-stage all-flash array can eliminate 60% to 90% of the performance discrepancy between memory-level GPU data transfer rates and storage access bandwidth by removing unnecessary data copies, memory management, and user/kernel-mode switching in the current system software stack. In addition, our all-flash array prototype consumes, on average, 49% less dynamic power than the baseline storage array.
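As context for the overheads the abstract cites, below is a minimal sketch of the conventional file-to-GPU data path that such systems take today (assuming standard POSIX I/O and the CUDA runtime API; load_file_to_gpu and its buffer handling are illustrative, not the paper's implementation):

    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cuda_runtime.h>

    /* Conventional file-to-GPU path: data is first read through the
     * kernel I/O stack into a host buffer (one copy, plus a
     * user/kernel-mode switch per read()), then copied again over
     * PCIe into device memory. */
    int load_file_to_gpu(const char *path, void **dev_buf, size_t size) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) return -1;

        void *host_buf = malloc(size);        /* pageable host staging buffer */
        if (!host_buf) { close(fd); return -1; }

        ssize_t n = read(fd, host_buf, size); /* copy #1: storage -> host */
        close(fd);
        if (n != (ssize_t)size) { free(host_buf); return -1; }

        if (cudaMalloc(dev_buf, size) != cudaSuccess) { free(host_buf); return -1; }
        cudaMemcpy(*dev_buf, host_buf, size,  /* copy #2: host -> device */
                   cudaMemcpyHostToDevice);
        free(host_buf);
        return 0;
    }

Each read() in this path crosses the user/kernel boundary, and the pageable staging buffer forces a second copy before data reaches the device; these are the data copies, memory-management steps, and mode switches the prototype all-flash array is designed to remove.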