In-memory database acceleration on FPGAs: a survey
Delft University of Technology, Delft, The Netherlands
The VLDB Journal, 2019
@article{fang2019memory,
title={In-memory database acceleration on FPGAs: a survey},
author={Fang, Jian and Mulder, Yvo TB and Hidders, Jan and Lee, Jinho and Hofstee, H Peter},
journal={The VLDB Journal},
pages={1--27},
year={2019},
publisher={Springer}
}
While FPGAs have seen prior use in database systems, in recent years interest in using FPGAs to accelerate databases has declined in both industry and academia for the following three reasons. First, specifically for in-memory databases, FPGAs integrated with conventional I/O provide insufficient bandwidth, limiting performance. Second, GPUs, which can also provide high throughput and are easier to program, have emerged as a strong accelerator alternative. Third, programming FPGAs requires developers to have full-stack skills, from high-level algorithm design to low-level circuit implementation. The good news is that these challenges are being addressed. New interface technologies connect FPGAs into the system at main-memory bandwidth, and the latest FPGAs provide local memory competitive in capacity and bandwidth with GPUs. Ease of programming is improving through support for shared coherent virtual memory between the host and the accelerator, support for higher-level languages, and domain-specific tools that generate FPGA designs automatically. Therefore, this paper surveys the use of FPGAs to accelerate in-memory database systems, targeting designs that can operate at the speed of main memory.
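To make the "higher-level languages" point concrete, the sketch below shows what a simple in-memory selection scan might look like when written in C++ for a high-level synthesis (HLS) flow instead of as a hand-designed circuit. This is a minimal illustration under assumptions of ours (the kernel name, interface, and pipelining pragma are not taken from the paper); it only indicates the level of abstraction such tools expose.

```cpp
// Illustrative sketch only: a column-scan/selection kernel written for an
// HLS-style flow (e.g., Vitis HLS). Names and interfaces are assumptions
// for illustration, not code from the surveyed paper.
#include <cstdint>
#include <cstddef>

// Scan a column of 64-bit keys and emit the indices of rows whose key falls
// in [lo, hi). With the PIPELINE pragma, the HLS tool aims to accept one
// element per clock cycle, so throughput is bounded by memory bandwidth
// rather than by instruction issue, which is the regime in-memory database
// accelerators target.
extern "C" void filter_scan(const uint64_t* keys,    // input column
                            uint32_t*       out_idx, // matching row indices
                            uint32_t*       out_cnt, // number of matches
                            size_t          n,
                            uint64_t        lo,
                            uint64_t        hi) {
    uint32_t count = 0;
    for (size_t i = 0; i < n; ++i) {
#pragma HLS PIPELINE II=1
        const uint64_t k = keys[i];
        if (k >= lo && k < hi) {
            out_idx[count++] = static_cast<uint32_t>(i);
        }
    }
    *out_cnt = count;
}
```

The same source also compiles with an ordinary C++ compiler (the HLS pragma is simply ignored), which is part of what makes this style of FPGA development more approachable than register-transfer-level design.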
November 3, 2019 by hgpu