Revisiting Query Performance in GPU Database Systems
Microsoft Gray Systems Lab
arXiv:2302.00734 [cs.DB], 1 Feb 2023
@misc{cao2023revisiting,
  doi       = {10.48550/arXiv.2302.00734},
  url       = {https://arxiv.org/abs/2302.00734},
  author    = {Cao, Jiashen and Sen, Rathijit and Interlandi, Matteo and Arulraj, Joy and Kim, Hyesoon},
  keywords  = {Databases (cs.DB), Hardware Architecture (cs.AR), FOS: Computer and information sciences},
  title     = {Revisiting Query Performance in GPU Database Systems},
  publisher = {arXiv},
  year      = {2023},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
GPUs offer massive compute parallelism and high-bandwidth memory access. GPU database systems seek to exploit these capabilities to accelerate data analytics. Although modern GPUs have more resources (e.g., higher DRAM bandwidth) than ever before, judicious choices for query processing that avoid wasteful resource allocation remain advantageous. Database systems can save GPU runtime costs through just-enough resource allocation, or improve query throughput with concurrent query processing by leveraging new GPU capabilities such as Multi-Instance GPU (MIG). In this paper, we conduct a cross-stack performance and resource utilization analysis of five GPU database systems. We study both database-level and micro-architectural aspects, and offer recommendations to database developers. We also demonstrate how to use and extend the traditional roofline model to identify GPU resource bottlenecks. This enables users to conduct what-if analysis to forecast the performance impact of different resource allocations or degrees of concurrency. Our methodology addresses a key user pain point in selecting optimal configurations by removing the need for exhaustive testing across a multitude of resource configurations.
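The roofline model the abstract refers to bounds a kernel's attainable throughput by the lesser of peak compute and memory bandwidth times arithmetic intensity. As a minimal illustrative sketch (the peak numbers below are assumptions loosely based on an A100-class GPU, not measurements from the paper):

```python
# Minimal roofline-model sketch. Peak compute and bandwidth figures are
# illustrative assumptions, not values reported in the paper.

def roofline(peak_gflops, bandwidth_gbps, arithmetic_intensity):
    """Attainable GFLOP/s for a kernel with the given FLOP/byte ratio."""
    return min(peak_gflops, bandwidth_gbps * arithmetic_intensity)

# Hypothetical GPU: 19500 GFLOP/s peak FP32, 1555 GB/s DRAM bandwidth.
peak, bw = 19500.0, 1555.0
ridge = peak / bw  # intensity at which a kernel becomes compute-bound

for ai in (0.5, 4.0, ridge, 64.0):
    bound = "memory" if ai < ridge else "compute"
    print(f"AI={ai:6.2f} FLOP/B -> {roofline(peak, bw, ai):8.1f} GFLOP/s ({bound}-bound)")
```

A what-if analysis in this spirit would swap in the peak compute and bandwidth of a smaller resource allocation (e.g., one MIG slice) and compare the resulting bounds, rather than benchmarking every configuration.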
February 5, 2023 by hgpu