
TAP: A TLP-Aware Cache Management Policy for a CPU-GPU Heterogeneous Architecture

Jaekyu Lee, Hyesoon Kim
School of Computer Science, Georgia Institute of Technology
The 18th IEEE International Symposium on High Performance Computer Architecture (HPCA-18), 2012

@inproceedings{lee2012tap,
   title={TAP: A TLP-Aware Cache Management Policy for a CPU-GPU Heterogeneous Architecture},
   author={Lee, Jaekyu and Kim, Hyesoon},
   booktitle={The 18th IEEE International Symposium on High Performance Computer Architecture (HPCA-18)},
   year={2012}
}

Combining CPUs and GPUs on the same chip has become a popular architectural trend. However, these heterogeneous architectures put more pressure on shared resource management. In particular, managing the last-level cache (LLC) is critical to performance. Recently, many researchers have proposed shared cache management mechanisms, including dynamic cache partitioning and promotion-based cache management, but no cache management work has been done on CPU-GPU heterogeneous architectures. Sharing the LLC between CPUs and GPUs brings new challenges due to the different characteristics of CPU and GPGPU applications. Unlike most memory-intensive CPU benchmarks, which hide memory latency with caching, many GPGPU applications hide memory latency by combining thread-level parallelism (TLP) and caching. In this paper, we propose a TLP-aware cache management policy for CPU-GPU heterogeneous architectures. We introduce a core-sampling mechanism to detect how caching affects the performance of a GPGPU application. Inspired by two previous cache management schemes, Utility-based Cache Partitioning (UCP) and Re-Reference Interval Prediction (RRIP), we propose two new mechanisms: TAP-UCP and TAP-RRIP. TAP-UCP improves performance by 5% over UCP and 11% over LRU on 152 heterogeneous workloads, and TAP-RRIP improves performance by 9% over RRIP and 12% over LRU.
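The core-sampling idea described in the abstract is to run the same GPGPU kernel on a few sampled cores under opposing cache policies and compare their performance: if caching matters, the two samples diverge; if TLP already hides memory latency, they do not. Below is a minimal sketch of that comparison step, not the authors' implementation; the function name, the IPC inputs, and the 5% threshold are all hypothetical.

```python
def classify_cache_sensitivity(ipc_friendly, ipc_hostile, threshold=0.05):
    """Compare the IPC of two sampled cores running the same kernel,
    one under a cache-friendly policy and one under a cache-hostile
    policy. A large relative gap suggests the application benefits
    from caching; a small gap suggests TLP alone hides latency.
    The 5% threshold is an illustrative assumption."""
    gap = (ipc_friendly - ipc_hostile) / ipc_hostile
    if gap > threshold:
        return "cache-sensitive"
    return "cache-insensitive (TLP hides memory latency)"

# Example: sampled cores report IPCs of 1.30 and 1.02, a ~27% gap,
# so the kernel would be classified as cache-sensitive.
print(classify_cache_sensitivity(1.30, 1.02))
```

A policy like TAP-UCP or TAP-RRIP could then use this classification to decide whether allocating LLC capacity to the GPGPU application is worthwhile, or whether that capacity is better given to co-running CPU workloads.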
