A Detailed GPU Cache Model Based on Reuse Distance Theory
Eindhoven University of Technology
The 20th IEEE International Symposium On High Performance Computer Architecture (HPCA ’14), 2014
@inproceedings{nugteren2014detailed,
  title={A Detailed GPU Cache Model Based on Reuse Distance Theory},
  author={Nugteren, Cedric and van den Braak, Gert-Jan and Corporaal, Henk and Bal, Henri},
  booktitle={20th IEEE International Symposium on High Performance Computer Architecture (HPCA)},
  year={2014}
}
As modern GPUs rely partly on their on-chip memories to counter the imminent off-chip memory wall, the efficient use of their caches has become important for performance and energy. However, optimising cache locality systematically requires insight into and prediction of cache behaviour. On sequential processors, stack distance or reuse distance theory is a well-known means to model cache behaviour. However, it is not straightforward to apply this theory to GPUs, mainly because of the parallel execution model and fine-grained multi-threading. This work extends reuse distance to GPUs by modelling: 1) the GPU’s hierarchy of threads, warps, threadblocks, and sets of active threads, 2) conditional and non-uniform latencies, 3) cache associativity, 4) miss-status holding registers (MSHRs), and 5) warp divergence. We implement the model in C++ and extend the Ocelot GPU emulator to extract lists of memory addresses. We compare our model with measured cache miss rates for the Parboil and PolyBench/GPU benchmark suites, showing a mean absolute error of 6% and 8% for two cache configurations. We show that our model is both faster and more accurate than the GPGPU-Sim simulator.
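To make the baseline theory concrete, the sketch below illustrates classical (sequential) reuse distance analysis in C++: the reuse distance of an access is the number of distinct cache lines touched since the previous access to the same line, and on a fully-associative LRU cache of C lines an access hits exactly when that distance is smaller than C. This is only the sequential starting point that the paper extends; the GPU-specific features listed above (thread hierarchy, latencies, associativity, MSHRs, divergence) are not modelled here, and the function names, toy trace, and quadratic distance computation are illustrative choices rather than the paper's implementation.

#include <cstdint>
#include <cstdio>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Reuse distance of an access: the number of distinct cache lines touched since
// the previous access to the same line (SIZE_MAX marks a cold first access).
std::vector<size_t> reuse_distances(const std::vector<uint64_t>& lines) {
    std::unordered_map<uint64_t, size_t> last_use;  // line -> index of its previous access
    std::vector<size_t> dist(lines.size());
    for (size_t i = 0; i < lines.size(); ++i) {
        auto it = last_use.find(lines[i]);
        if (it == last_use.end()) {
            dist[i] = SIZE_MAX;  // compulsory (cold) access
        } else {
            // Distinct lines strictly between the previous and the current access.
            std::unordered_set<uint64_t> distinct(lines.begin() + it->second + 1,
                                                  lines.begin() + i);
            dist[i] = distinct.size();
        }
        last_use[lines[i]] = i;
    }
    return dist;
}

// Fully-associative LRU cache of `capacity` lines: an access hits exactly when
// its reuse distance is smaller than the capacity.
double miss_rate(const std::vector<uint64_t>& addresses,
                 size_t line_bytes, size_t capacity) {
    std::vector<uint64_t> lines;
    for (uint64_t a : addresses) lines.push_back(a / line_bytes);
    size_t misses = 0;
    for (size_t d : reuse_distances(lines))
        if (d == SIZE_MAX || d >= capacity) ++misses;
    return static_cast<double>(misses) / static_cast<double>(lines.size());
}

int main() {
    // Toy byte-address trace that revisits three cache lines (128-byte lines assumed).
    std::vector<uint64_t> trace = {0x000, 0x080, 0x100, 0x000, 0x080, 0x100, 0x200, 0x000};
    // With a 4-line cache the four revisits hit and the four cold accesses miss (0.50).
    std::printf("miss rate = %.2f\n", miss_rate(trace, 128, 4));
    return 0;
}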