ScatterAlloc: Massively Parallel Dynamic Memory Allocation for the GPU
Institute for Computer Graphics and Vision, Graz University of Technology
Proceedings of Innovative Parallel Computing (InPar’12), 2012
@inproceedings{steinberger2012scatteralloc,
  title={ScatterAlloc: Massively Parallel Dynamic Memory Allocation for the GPU},
  author={Steinberger, M. and Kenzel, M. and Kainz, B. and Schmalstieg, D.},
  booktitle={Proceedings of Innovative Parallel Computing (InPar'12)},
  year={2012}
}
In this paper, we analyze the special requirements of a dynamic memory allocator designed for massively parallel architectures such as Graphics Processing Units (GPUs). We show that traditional strategies, which work well on CPUs, are not well suited for use on GPUs, and present the design of ScatterAlloc, an allocator that can efficiently deal with hundreds of requests in parallel. Our allocator greatly reduces collisions and congestion by scattering memory requests based on hashing. We analyze ScatterAlloc in terms of allocation speed, data access time, and fragmentation, and compare it to current state-of-the-art allocators, including the one provided with the NVIDIA CUDA toolkit. Our results show that ScatterAlloc clearly outperforms these approaches, yielding speed-ups between 10 and 100.
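To illustrate the idea of scattering allocation requests by hashing, the following minimal CUDA sketch hashes the requesting multiprocessor and warp to a starting page in a fixed pool of equally sized pages, then probes for a free slot with atomics. This is an assumption-laden illustration of the general technique, not the paper's actual ScatterAlloc implementation; all names (PAGE_COUNT, SLOTS_PER_PAGE, SLOT_BYTES, g_fill, g_pool, scatter_alloc) and the hash constants are hypothetical.

// Illustrative sketch of hashing-based request scattering (not the paper's code).
constexpr int PAGE_COUNT     = 1024;   // assumed number of pages in the pool
constexpr int SLOTS_PER_PAGE = 32;     // assumed slots per page
constexpr int SLOT_BYTES     = 64;     // assumed slot size in bytes

__device__ unsigned int g_fill[PAGE_COUNT];                        // slots used per page
__device__ char         g_pool[PAGE_COUNT][SLOTS_PER_PAGE * SLOT_BYTES];

// Hash the requesting multiprocessor and warp to a starting page, so that
// concurrent warps probe different pages and rarely contend on the same counter.
__device__ void* scatter_alloc()
{
    unsigned int smid;
    asm("mov.u32 %0, %%smid;" : "=r"(smid));                       // current multiprocessor id
    unsigned int warp = (blockIdx.x * blockDim.x + threadIdx.x) / warpSize;
    unsigned int page = (smid * 2654435761u + warp * 40503u) % PAGE_COUNT;

    // Linearly probe pages starting at the hashed position.
    for (int probe = 0; probe < PAGE_COUNT; ++probe) {
        unsigned int slot = atomicAdd(&g_fill[page], 1u);
        if (slot < SLOTS_PER_PAGE)
            return &g_pool[page][slot * SLOT_BYTES];               // claimed a free slot
        atomicSub(&g_fill[page], 1u);                              // page full, undo and move on
        page = (page + 1) % PAGE_COUNT;
    }
    return nullptr;                                                // pool exhausted
}

Because the hash spreads simultaneous requests across different pages, most allocations touch distinct atomic counters instead of serializing on a single free list, which is the congestion problem the abstract refers to.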
June 16, 2012 by hgpu