Shredder: GPU-Accelerated Incremental Storage and Computation

Pramod Bhatotia, Rodrigo Rodrigues, Akshat Verma
Max Planck Institute for Software Systems (MPI-SWS)
10th USENIX Conference on File and Storage Technologies (FAST ’12), 2012


@inproceedings{shredder-fast12,
   title={Shredder: {GPU}-Accelerated Incremental Storage and Computation},
   author={Bhatotia, P. and Rodrigues, R. and Verma, A.},
   booktitle={USENIX Conference on File and Storage Technologies (FAST)},
   year={2012}
}


Redundancy elimination using data deduplication and incremental data processing has emerged as an important technique to minimize storage and computation requirements in data center computing. In this paper, we present the design, implementation, and evaluation of Shredder, a high-performance content-based chunking framework for supporting incremental storage and computation systems. Shredder exploits the massively parallel processing power of GPUs to overcome the CPU bottlenecks of content-based chunking in a cost-effective manner. Unlike previous uses of GPUs, which have focused on applications where computation costs are dominant, Shredder is designed to operate in both compute- and data-intensive environments. To allow this, Shredder provides several novel optimizations aimed at reducing the cost of transferring data between host (CPU) and GPU, fully utilizing the multicore architecture at the host, and reducing GPU memory access latencies. With our optimizations, Shredder achieves a speedup of over 5X for chunking bandwidth compared to our optimized parallel implementation without a GPU on the same host system. Furthermore, we present two real-world applications of Shredder: an extension to HDFS, which serves as a basis for incremental MapReduce computations, and an incremental cloud backup system. In both contexts, Shredder detects redundancies in the input data across successive runs, leading to significant savings in storage, computation, and end-to-end completion times.
