
Concurrent query processing in a GPU-based database system

Hao Li, Yi-Cheng Tu, Bo Zeng
Department of Computer Science and Engineering, University of South Florida, Tampa, FL, United States of America
PLoS ONE 14(4): e0214720, 2019

@article{10.1371/journal.pone.0214720,

   author={Li, Hao and Tu, Yi-Cheng and Zeng, Bo},

   journal={PLOS ONE},

   publisher={Public Library of Science},

   title={Concurrent query processing in a GPU-based database system},

   year={2019},

   month={04},

   volume={14},

   url={https://doi.org/10.1371/journal.pone.0214720},

   pages={1-26},

   number={4},

   doi={10.1371/journal.pone.0214720}

}

The unrivaled computing capabilities of modern GPUs meet the demand of processing massive amounts of data seen in many application domains. While traditional HPC systems support applications as standalone entities that occupy entire GPUs, there are GPU-based DBMSs in which multiple tasks are meant to run concurrently on the same device. To that end, system-level resource management mechanisms are needed to fully unleash the computing power of GPUs in large-scale data processing, and some research has focused on this problem. In our previous work, we explored modeling of single compute-bound kernels on GPUs under NVIDIA's CUDA framework and provided an in-depth anatomy of NVIDIA's concurrent kernel execution mechanism (CUDA streams). This paper focuses on resource allocation among multiple GPU applications, with the goal of optimizing system throughput in such systems. In contrast to earlier studies that enable concurrent task support on GPUs, such as MultiQx-GPU, we take a different approach: we control the launching parameters of multiple GPU kernels, as provided by compile-time performance modeling, as a kernel-level optimization, and we add a more general pre-processing model with batch-level control to further enhance performance. Specifically, we construct a variation of the multi-dimensional knapsack model to maximize concurrency in a multi-kernel environment. We present an in-depth analysis of the model and develop an algorithm based on dynamic programming to solve it. We prove that the algorithm finds optimal solutions (in terms of thread concurrency) and has pseudo-polynomial time and space complexity. These results are verified by extensive experiments on our microbenchmark, which consists of real-world GPU queries. Furthermore, the solutions identified by our method also significantly reduce the total running time of the workload, as compared to sequential and MultiQx-GPU executions.
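To give a flavor of the kind of model the abstract describes, below is a minimal sketch of a multi-dimensional knapsack dynamic program that maximizes thread concurrency on one streaming multiprocessor. The kernel names, per-block resource demands, and SM capacities are hypothetical, and the formulation is an illustration of the general technique, not the authors' actual model or algorithm.

```python
# Illustrative sketch only: the paper formulates a variation of the
# multi-dimensional knapsack problem solved by dynamic programming.
# The kernels, resource numbers, and capacities here are assumptions
# chosen for demonstration, not taken from the paper.

from functools import lru_cache

# Hypothetical per-kernel demands of one thread block:
# (threads, registers, shared memory in bytes), plus the payoff
# (concurrent threads gained) of admitting one more block.
KERNELS = [
    {"name": "hash_join",   "block": (256, 256 * 32, 4096), "threads": 256},
    {"name": "selection",   "block": (128, 128 * 24, 1024), "threads": 128},
    {"name": "aggregation", "block": (256, 256 * 40, 8192), "threads": 256},
]

# Hypothetical capacities of one SM: max threads, registers, shared memory.
CAPACITY = (2048, 65536, 49152)


def max_concurrency(kernels, capacity):
    """Return the maximum total number of concurrent threads obtainable
    by choosing how many blocks of each kernel to place on one SM.

    The recursion is over (kernel index, residual capacity), so the cost
    is pseudo-polynomial in the capacity values, mirroring the complexity
    class the paper reports for its algorithm.
    """

    @lru_cache(maxsize=None)
    def best(i, cap):
        if i == len(kernels):
            return 0
        threads_k, regs_k, smem_k = kernels[i]["block"]
        gain = kernels[i]["threads"]
        best_val = best(i + 1, cap)  # option: place zero blocks of kernel i
        t, r, s = cap
        blocks = 0
        # Try placing 1, 2, ... blocks of kernel i while capacity remains.
        while t >= threads_k and r >= regs_k and s >= smem_k:
            t, r, s = t - threads_k, r - regs_k, s - smem_k
            blocks += 1
            best_val = max(best_val, blocks * gain + best(i + 1, (t, r, s)))
        return best_val

    return best(0, capacity)


if __name__ == "__main__":
    print("max concurrent threads per SM:", max_concurrency(KERNELS, CAPACITY))
```

In this toy setting, the chosen block counts could then be translated into launching parameters (grid and block sizes) for the co-scheduled kernels; the paper's actual model and proof of optimality are developed in the full text.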