MRPB: Memory Request Prioritization for Massively Parallel Processors
Princeton University
The 20th Int. Symp. on High Performance Computer Architecture (HPCA 2014), 2014
@inproceedings{jia2014mrpb,
  title={MRPB: Memory Request Prioritization for Massively Parallel Processors},
  author={Jia, Wenhao and Shaw, Kelly A. and Martonosi, Margaret},
  booktitle={20th International Symposium on High Performance Computer Architecture (HPCA)},
  year={2014}
}
Massively parallel, throughput-oriented systems such as graphics processing units (GPUs) offer high performance for a broad range of programs. They are, however, complex to program, especially because of their intricate memory hierarchies with multiple address spaces. In response, modern GPUs have widely adopted caches, hoping to provide smoother reductions in memory access traffic and latency. Unfortunately, GPU caches often have mixed or unpredictable performance impact due to cache contention that results from the high thread counts in GPUs. We propose the memory request prioritization buffer (MRPB) to ease GPU programming and improve GPU performance. This hardware structure improves the caching efficiency of massively parallel workloads by applying two prioritization methods, request reordering and cache bypassing, to memory requests before they access a cache. MRPB then releases requests into the cache in a more cache-friendly order. The result is drastically reduced cache contention and improved use of the limited per-thread cache capacity. For a simulated 16KB L1 cache, MRPB improves the average performance of the entire PolyBench and Rodinia suites by 2.65x and 1.27x respectively, outperforming a state-of-the-art GPU cache management technique.
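To illustrate the two prioritization methods the abstract names, here is a toy software model of an MRPB-like buffer. The per-warp FIFO sizing, the drain-one-warp-at-a-time policy, and the bypass-on-full trigger are simplifying assumptions for illustration, not the paper's exact hardware design:

```python
from collections import defaultdict, deque

class ToyMRPB:
    """Toy model of an MRPB-style prioritization buffer.
    Assumption: one small FIFO per warp; requests bypass the cache
    when a warp's FIFO is full (an illustrative bypass trigger)."""

    def __init__(self, queue_capacity=4):
        self.queue_capacity = queue_capacity
        self.queues = defaultdict(deque)  # warp id -> FIFO of addresses

    def enqueue(self, warp_id, addr):
        q = self.queues[warp_id]
        if len(q) >= self.queue_capacity:
            # Buffer full: route the request around the cache instead of
            # letting it evict lines other warps still need.
            return ("bypass", addr)
        q.append(addr)
        return ("queued", addr)

    def drain(self):
        """Release requests one warp at a time, so each warp's working set
        stays resident in the cache while that warp is being served."""
        order = []
        for warp_id in sorted(self.queues):
            while self.queues[warp_id]:
                order.append((warp_id, self.queues[warp_id].popleft()))
        return order

# Interleaved requests from two warps...
mrpb = ToyMRPB()
for warp, addr in [(0, 0x100), (1, 0x200), (0, 0x104), (1, 0x204)]:
    mrpb.enqueue(warp, addr)
# ...are released grouped per warp, a more cache-friendly order:
print(mrpb.drain())  # [(0, 256), (0, 260), (1, 512), (1, 516)]
```

Grouping a warp's requests together before they reach the small per-SM L1 is what lets the limited per-thread cache capacity be reused instead of thrashed; the bypass path handles requests that would only pollute the cache.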
January 16, 2014 by hgpu