
GPUs as Storage System Accelerators

Samer Al-Kiswany, Abdullah Gharaibeh, Matei Ripeanu
Electrical and Computer Engineering Department, The University of British Columbia, 2332 Main Mall, Vancouver, BC V6T 1Z4, Canada
arXiv:1202.3669v1 [cs.DC] (16 Feb 2012)

@article{2012arXiv1202.3669A,
   author = {{Al-Kiswany}, S. and {Gharaibeh}, A. and {Ripeanu}, M.},
   title = "{GPUs as Storage System Accelerators}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1202.3669},
   primaryClass = "cs.DC",
   keywords = {Computer Science - Distributed, Parallel, and Cluster Computing},
   year = 2012,
   month = feb,
   adsurl = {http://adsabs.harvard.edu/abs/2012arXiv1202.3669A},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


Massively multicore processors, such as Graphics Processing Units (GPUs), provide, at a comparable price, peak performance an order of magnitude higher than traditional CPUs. This drop in the cost of computation, like any order-of-magnitude drop in the cost per unit of performance for a class of system components, creates the opportunity to redesign systems and to explore new ways to engineer them to recalibrate the cost-to-performance relation. This project explores the feasibility of harnessing GPUs' computational power to improve the performance, reliability, or security of distributed storage systems. In this context, we present the design of a storage system prototype that uses GPU offloading to accelerate a number of computationally intensive primitives based on hashing, and introduce techniques to efficiently leverage the processing power of GPUs. We evaluate the performance of this prototype under two configurations: as a content-addressable storage system that facilitates online similarity detection between successive versions of the same file, and as a traditional system that uses hashing to preserve data integrity. Further, we evaluate the impact of offloading to the GPU on competing applications' performance. Our results show that this technique can bring tangible performance gains without negatively impacting the performance of concurrently running applications.
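The primitive being offloaded is block-level hashing: a file is divided into fixed-size chunks and each chunk is fingerprinted, so that chunks shared between successive file versions can be detected (content-addressable storage) or verified (integrity checking). As a rough illustration of how this workload maps onto a GPU, the sketch below assigns one CUDA thread per chunk and computes a simple FNV-1a fingerprint. This is not the paper's code: the chunk size, the hash function, and all identifiers are illustrative assumptions; the actual prototype offloads cryptographic hashing primitives.

// Minimal sketch (not the paper's code): each CUDA thread fingerprints one
// fixed-size data chunk with FNV-1a. The chunk size, hash choice, and names
// are illustrative assumptions.
#include <cstdint>
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

#define CHUNK_SIZE 4096   // assumed storage-system block size

__global__ void hashChunks(const uint8_t *data, uint64_t *digests, int numChunks) {
    int c = blockIdx.x * blockDim.x + threadIdx.x;
    if (c >= numChunks) return;
    const uint8_t *p = data + (size_t)c * CHUNK_SIZE;
    uint64_t h = 1469598103934665603ULL;      // FNV-1a offset basis
    for (int i = 0; i < CHUNK_SIZE; ++i) {
        h ^= p[i];
        h *= 1099511628211ULL;                // FNV-1a prime
    }
    digests[c] = h;                           // one fingerprint per chunk
}

int main() {
    const int numChunks = 1024;               // hash a 4 MB batch in one launch
    std::vector<uint8_t> host(static_cast<size_t>(numChunks) * CHUNK_SIZE, 0xAB);

    uint8_t *dData = nullptr; uint64_t *dDigests = nullptr;
    cudaMalloc(&dData, host.size());
    cudaMalloc(&dDigests, numChunks * sizeof(uint64_t));
    cudaMemcpy(dData, host.data(), host.size(), cudaMemcpyHostToDevice);

    int threads = 256;
    int grid = (numChunks + threads - 1) / threads;
    hashChunks<<<grid, threads>>>(dData, dDigests, numChunks);

    std::vector<uint64_t> digests(numChunks);
    cudaMemcpy(digests.data(), dDigests, numChunks * sizeof(uint64_t),
               cudaMemcpyDeviceToHost);
    printf("chunk 0 fingerprint: %016llx\n", (unsigned long long)digests[0]);

    cudaFree(dData); cudaFree(dDigests);
    return 0;
}

Batching many chunks into a single transfer and kernel launch, as above, is one general way to amortize the PCIe copy and launch overhead; matching fingerprints between two file versions mark chunks that need not be stored again. The paper's own techniques for efficiently leveraging the GPU are described in the full text.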
