
Methods for GPU Acceleration of Big Data Applications

Reza Mokhtari
University of Toronto, 2017

@phdthesis{mokhtari2017methods,
   title={Methods for GPU acceleration of Big Data applications},
   author={Mokhtari, Reza},
   school={University of Toronto},
   year={2017}
}

Big Data applications are trivially parallelizable because they typically consist of simple, straightforward operations performed on a large number of independent input records. GPUs appear to be particularly well suited for this class of applications given their high degree of parallelism and high memory bandwidth. However, a number of issues severely complicate matters when trying to exploit GPUs to accelerate these applications. First, Big Data is often too large to fit in the GPU’s separate, limited-size memory. Second, data transfers to and from GPUs are expensive because the bus that connects CPUs and GPUs has limited bandwidth and high latency; in practice, this often leaves GPU cores starved for data. Third, GPU memory bandwidth is high only if data is laid out in memory such that GPU threads accessing memory at the same time access adjacent locations; unfortunately, this is not how Big Data is laid out in practice. This dissertation presents three solutions that help mitigate these issues and enable GPU acceleration of Big Data applications: BigKernel, a system that automates and optimizes CPU-GPU communication and GPU memory accesses; S-L1, a caching subsystem implemented in software; and a hash table designed for GPUs. Our key contributions are: (i) the first automatic CPU-GPU data management system that improves on the performance of the state-of-the-art double-buffering scheme (a scheme that overlaps communication with computation to improve GPU performance); (ii) a GPU level 1 cache implemented entirely in software that outperforms the hardware L1 cache when used by Big Data applications; and (iii) a GPU-based hash table (for storing the key-value pairs popular in Big Data applications) that can grow beyond the available GPU memory yet retain reasonable performance. These solutions allow many existing Big Data applications to be ported to GPUs in a straightforward way, achieving performance gains of between 1.04X and 7.2X over the fastest CPU-based multi-threaded implementations.
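
The double-buffering baseline that BigKernel is compared against overlaps CPU-GPU transfers with kernel execution so that GPU cores are not left data-starved while records stream over the bus. The CUDA sketch below illustrates that general idea only; the chunk size, the per-record squaring kernel, and the buffer/stream names are illustrative assumptions, not details taken from the thesis.

#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical per-record operation: square each input value.
__global__ void process(const float* in, float* out, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * in[i];
}

int main() {
    const size_t total   = 1 << 24;   // total records (assumed larger than one chunk)
    const size_t chunk   = 1 << 20;   // records transferred and processed per step
    const size_t nChunks = total / chunk;

    // Pinned host memory so cudaMemcpyAsync can overlap with kernel execution.
    float *h_in, *h_out;
    cudaMallocHost(&h_in,  total * sizeof(float));
    cudaMallocHost(&h_out, total * sizeof(float));
    for (size_t i = 0; i < total; ++i) h_in[i] = (float)i;

    // Two device buffers and two streams: while one chunk is being processed,
    // the next chunk is copied into the other buffer.
    float *d_in[2], *d_out[2];
    cudaStream_t stream[2];
    for (int b = 0; b < 2; ++b) {
        cudaMalloc(&d_in[b],  chunk * sizeof(float));
        cudaMalloc(&d_out[b], chunk * sizeof(float));
        cudaStreamCreate(&stream[b]);
    }

    const int threads = 256;
    const int blocks  = (int)((chunk + threads - 1) / threads);

    for (size_t c = 0; c < nChunks; ++c) {
        int b = (int)(c & 1);  // alternate between the two buffers/streams
        cudaMemcpyAsync(d_in[b], h_in + c * chunk, chunk * sizeof(float),
                        cudaMemcpyHostToDevice, stream[b]);
        process<<<blocks, threads, 0, stream[b]>>>(d_in[b], d_out[b], chunk);
        cudaMemcpyAsync(h_out + c * chunk, d_out[b], chunk * sizeof(float),
                        cudaMemcpyDeviceToHost, stream[b]);
    }
    cudaDeviceSynchronize();

    printf("h_out[12345] = %f\n", h_out[12345]);

    for (int b = 0; b < 2; ++b) {
        cudaFree(d_in[b]); cudaFree(d_out[b]); cudaStreamDestroy(stream[b]);
    }
    cudaFreeHost(h_in); cudaFreeHost(h_out);
    return 0;
}

BigKernel, as described in the abstract, automates this kind of staging and also restructures GPU memory accesses; the sketch shows only the manual overlap scheme it improves upon.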
