GGArray: A Dynamically Growable GPU Array
Instituto de Informatica, Universidad Austral de Chile, Valdivia, Chile
arXiv:2209.00103 [cs.DC]
@misc{https://doi.org/10.48550/arxiv.2209.00103,
doi={10.48550/ARXIV.2209.00103},
url={https://arxiv.org/abs/2209.00103},
author={Meneses, Enzo and Navarro, Cristóbal A. and Ferrada, Héctor},
keywords={Distributed, Parallel, and Cluster Computing (cs.DC), FOS: Computer and information sciences},
title={GGArray: A Dynamically Growable GPU Array},
publisher={arXiv},
year={2022},
copyright={arXiv.org perpetual, non-exclusive license}
}
We present a dynamically growable GPU array (GGArray), implemented entirely on the GPU, that requires no synchronization with the host. The goal is to simplify the programming of GPU applications that need dynamic memory by offering a structure that does not require pre-allocating GPU VRAM for the worst-case scenario. The GGArray is based on the LFVector, using an array of LFVectors in order to take advantage of the GPU architecture and the synchronization offered by thread blocks. The structure is compared against state-of-the-art alternatives such as a pre-allocated static array and a semi-static array that must be resized through communication with the host. Experimental evaluation shows that the GGArray offers competitive insertion and resize performance, but is slower for regular parallel memory accesses. Given these results, the GGArray is a potentially useful structure for applications with high uncertainty in memory usage, as well as for applications that run in phases, such as an insertion phase followed by a regular GPU phase. In such cases, the GGArray can be used for the first phase and the data then flattened for the second, allowing the faster classical GPU memory accesses. These results constitute a step towards an efficient parallel C++-style vector for modern GPU architectures.
September 4, 2022 by hgpu