A readahead prefetcher for GPU file system layer
Technion – Israel Institute of Technology
arXiv:2109.05366 [cs.DC], 11 Sep 2021
@misc{dimitsas2021readahead,
      title={A readahead prefetcher for GPU file system layer},
      author={Vasilis Dimitsas and Mark Silberstein},
      year={2021},
      eprint={2109.05366},
      archivePrefix={arXiv},
      primaryClass={cs.DC}
}
GPUs are broadly used in I/O-intensive big data applications. Prior works demonstrate the benefits of using a GPU-side file system layer, GPUfs, to improve GPU performance and programmability in such workloads. However, GPUfs fails to provide high performance for a common I/O pattern in which a GPU processes a whole data set sequentially. In this work, we propose a number of system-level optimizations to improve the performance of GPUfs for such workloads. We perform an in-depth analysis of the interplay between the GPU I/O access pattern, CPU-GPU PCIe transfers and SSD storage, and identify the main bottlenecks. We propose a new GPU I/O readahead prefetcher and a GPU page cache replacement mechanism to resolve them. The GPU I/O readahead prefetcher achieves more than 2× (geometric mean) higher bandwidth in a series of microbenchmarks compared to the original GPUfs. Furthermore, we evaluate the system on 14 applications derived from the RODINIA, PARBOIL and POLYBENCH benchmark suites. Our prefetching mechanism improves their execution time by up to 50% and their I/O bandwidth by 82% compared to traditional CPU-only data transfer techniques.
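The general idea of sequential readahead can be illustrated with a minimal host-side C++ sketch. This is not the paper's implementation: the names (ReadaheadState, prefetch_pages) and the window sizes are assumptions chosen for illustration. When a read request continues a sequential stream, the prefetcher grows its window and stages the following pages ahead of demand; a non-sequential access resets the window.

#include <algorithm>
#include <cstdint>
#include <cstdio>

// Hypothetical per-file readahead state; names are illustrative, not the GPUfs API.
struct ReadaheadState {
    uint64_t next_expected_page = 0;  // page we expect if the stream stays sequential
    uint32_t window_pages = 4;        // current readahead window size (assumed default)
};

static constexpr uint32_t kMaxWindowPages = 256;  // assumed cap on the window

// Stand-in for the transfer that would stage pages into the GPU page cache.
void prefetch_pages(uint64_t first_page, uint32_t count) {
    std::printf("prefetch pages [%llu, %llu)\n",
                (unsigned long long)first_page,
                (unsigned long long)(first_page + count));
}

// Called on every page-granularity read request coming from the GPU.
void on_read(ReadaheadState& ra, uint64_t page) {
    if (page == ra.next_expected_page) {
        // Sequential hit: grow the window and prefetch ahead of the request.
        ra.window_pages = std::min(ra.window_pages * 2, kMaxWindowPages);
        prefetch_pages(page + 1, ra.window_pages);
    } else {
        // Non-sequential access: shrink back to a small window, no speculative prefetch.
        ra.window_pages = 4;
    }
    ra.next_expected_page = page + 1;
}

int main() {
    ReadaheadState ra;
    for (uint64_t p = 0; p < 6; ++p) on_read(ra, p);  // sequential scan grows the window
    on_read(ra, 100);                                 // random jump resets the window
}

In this sketch the window doubles on every sequential hit, so a steady sequential scan quickly reaches the cap and keeps the storage and PCIe pipeline busy ahead of the GPU's demand reads, which is the behavior the paper's prefetcher targets for whole-data-set workloads.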
September 19, 2021 by hgpu