Vortex: Overcoming Memory Capacity Limitations in GPU-Accelerated Large-Scale Data Analytics
University of Michigan, Ann Arbor, Michigan, USA
arXiv:2502.09541 [cs.DB], 13 Feb 2025
@misc{yuan2025vortexovercomingmemorycapacity,
  title={Vortex: Overcoming Memory Capacity Limitations in GPU-Accelerated Large-Scale Data Analytics},
  author={Yichao Yuan and Advait Iyer and Lin Ma and Nishil Talati},
  year={2025},
  eprint={2502.09541},
  archivePrefix={arXiv},
  primaryClass={cs.DB},
  url={https://arxiv.org/abs/2502.09541}
}
Despite the high computational throughput of GPUs, limited memory capacity and bandwidth-limited CPU-GPU communication via PCIe links remain significant bottlenecks for accelerating large-scale data analytics workloads. This paper introduces Vortex, a GPU-accelerated framework designed for data analytics workloads that exceed GPU memory capacity. A key aspect of the framework is an optimized IO primitive that leverages all available PCIe links in a multi-GPU system to serve the IO demand of a single target GPU, routing data through the other GPUs to the target GPU that handles the IO-intensive analytics tasks. This approach is advantageous when the other GPUs are occupied with compute-bound workloads, such as popular AI applications that typically underutilize IO resources. We also introduce a novel programming model that separates GPU kernel development from IO scheduling, reducing programmer burden and enabling GPU code reuse. Additionally, we present the design of several important query operators and discuss a late materialization technique based on the GPU's zero-copy memory access. Without caching any data in GPU memory, Vortex improves the performance of the state-of-the-art GPU baseline, Proteus, by 5.7x on average and enhances price-performance by 2.5x compared to a CPU-based DuckDB baseline.
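To make the IO primitive concrete, the following CUDA sketch illustrates the general idea under stated assumptions: helper GPUs pull chunks from pinned host memory over their own PCIe links and forward them to a single target GPU with peer-to-peer copies, so the target GPU sees more aggregate IO bandwidth than its own link provides. The round-robin schedule, chunk size, and buffer names are illustrative only; this is not the Vortex API.

// multi_gpu_io_route.cu -- hypothetical sketch, not the paper's implementation
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>
#include <vector>

#define CUDA_CHECK(call)                                                    \
  do {                                                                      \
    cudaError_t err_ = (call);                                              \
    if (err_ != cudaSuccess) {                                              \
      fprintf(stderr, "CUDA error %s at %s:%d\n",                           \
              cudaGetErrorString(err_), __FILE__, __LINE__);                \
      exit(1);                                                              \
    }                                                                       \
  } while (0)

int main() {
  const int targetGpu = 0;             // GPU running the IO-intensive query
  const size_t chunkBytes = 64 << 20;  // 64 MiB chunks (illustrative)
  const int numChunks = 8;

  int numGpus = 0;
  CUDA_CHECK(cudaGetDeviceCount(&numGpus));

  // Pinned host buffer standing in for table data scanned from host memory.
  char *hostData = nullptr;
  CUDA_CHECK(cudaMallocHost(&hostData, chunkBytes * numChunks));

  // Destination buffer on the target GPU.
  CUDA_CHECK(cudaSetDevice(targetGpu));
  char *dstOnTarget = nullptr;
  CUDA_CHECK(cudaMalloc(&dstOnTarget, chunkBytes * numChunks));

  // Per-GPU staging buffers and streams.
  std::vector<char *> stage(numGpus, nullptr);
  std::vector<cudaStream_t> stream(numGpus);
  for (int g = 0; g < numGpus; ++g) {
    CUDA_CHECK(cudaSetDevice(g));
    CUDA_CHECK(cudaMalloc(&stage[g], chunkBytes));
    CUDA_CHECK(cudaStreamCreate(&stream[g]));
    if (g != targetGpu) {
      int canAccess = 0;
      CUDA_CHECK(cudaDeviceCanAccessPeer(&canAccess, g, targetGpu));
      if (canAccess) cudaDeviceEnablePeerAccess(targetGpu, 0);  // direct peer writes if supported
    }
  }

  // Round-robin chunks across GPUs so every PCIe link carries traffic.
  for (int c = 0; c < numChunks; ++c) {
    int g = c % numGpus;
    CUDA_CHECK(cudaSetDevice(g));
    char *hostChunk = hostData + (size_t)c * chunkBytes;
    char *dstChunk = dstOnTarget + (size_t)c * chunkBytes;
    if (g == targetGpu) {
      // Direct host -> target copy over the target's own PCIe link.
      CUDA_CHECK(cudaMemcpyAsync(dstChunk, hostChunk, chunkBytes,
                                 cudaMemcpyHostToDevice, stream[g]));
    } else {
      // Host -> helper over the helper's PCIe link, then helper -> target
      // over the inter-GPU link; stream ordering keeps the staging buffer safe.
      CUDA_CHECK(cudaMemcpyAsync(stage[g], hostChunk, chunkBytes,
                                 cudaMemcpyHostToDevice, stream[g]));
      CUDA_CHECK(cudaMemcpyPeerAsync(dstChunk, targetGpu, stage[g], g,
                                     chunkBytes, stream[g]));
    }
  }

  for (int g = 0; g < numGpus; ++g) {
    CUDA_CHECK(cudaSetDevice(g));
    CUDA_CHECK(cudaStreamSynchronize(stream[g]));
  }
  printf("Routed %d chunks to GPU %d using %d GPUs\n", numChunks, targetGpu, numGpus);
  return 0;
}

The late materialization idea can be sketched in a similar hedged way: a filter runs over a GPU-resident key column while the wide payload column stays in pinned, device-mapped host memory, so only rows that pass the predicate are ever fetched over PCIe via zero-copy access. Again, names and sizes are hypothetical and error checks are omitted for brevity; this is only an illustration of the technique, not the paper's code.

// zero_copy_late_mat.cu -- hypothetical sketch, not the paper's implementation
#include <cuda_runtime.h>
#include <cstdio>

__global__ void filterAndMaterialize(const int *keys, const long long *payload,
                                     long long *out, int *outCount,
                                     int n, int threshold) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n && keys[i] > threshold) {
    int slot = atomicAdd(outCount, 1);
    out[slot] = payload[i];  // zero-copy read: host memory touched only for selected rows
  }
}

int main() {
  const int n = 1 << 20;
  int *dKeys;          cudaMalloc(&dKeys, n * sizeof(int));
  long long *dOut;     cudaMalloc(&dOut, n * sizeof(long long));
  int *dCount;         cudaMalloc(&dCount, sizeof(int));
  cudaMemset(dCount, 0, sizeof(int));

  long long *hPayload;  // wide column kept in pinned, device-mapped host memory
  cudaHostAlloc(&hPayload, n * sizeof(long long), cudaHostAllocMapped);
  // ... populate dKeys and hPayload with real column data here ...

  long long *dPayload;  // device-visible alias of the pinned host buffer
  cudaHostGetDevicePointer(&dPayload, hPayload, 0);

  filterAndMaterialize<<<(n + 255) / 256, 256>>>(dKeys, dPayload, dOut, dCount, n, 42);
  cudaDeviceSynchronize();

  int count = 0;
  cudaMemcpy(&count, dCount, sizeof(int), cudaMemcpyDeviceToHost);
  printf("Materialized %d rows\n", count);
  return 0;
}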
February 16, 2025 by hgpu