A Locality-Aware Memory Hierarchy for Energy-Efficient GPU Architectures

Minsoo Rhu, Michael Sullivan, Jingwen Leng, Mattan Erez
Department of Electrical and Computer Engineering, University of Texas at Austin
MICRO’13, 2013

@inproceedings{rhu2013lamar,

   author={Minsoo Rhu and Michael Sullivan and Jingwen Leng and Mattan Erez},

   title={A Locality-Aware Memory Hierarchy for Energy-Efficient GPU Architectures},

   booktitle={Proceedings of the 46th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO-46)},

   location={Davis, California},

   month={December},

   year={2013},

   pdf={/micro2013_lamar.pdf},

   mycat={conference}

}

As GPUs’ compute capabilities grow, their memory hierarchy increasingly becomes a bottleneck. Current GPU memory hierarchies use coarse-grained memory accesses to exploit spatial locality, maximize peak bandwidth, simplify control, and reduce cache meta-data storage. These coarse-grained memory accesses, however, are a poor match for emerging GPU applications with irregular control flow and memory access patterns. Meanwhile, the massive multi-threading of GPUs and the simplicity of their cache hierarchies make CPU-specific memory system enhancements ineffective for improving the performance of irregular GPU applications. We design and evaluate a locality-aware memory hierarchy for throughput processors, such as GPUs. Our proposed design retains the advantages of coarse-grained accesses for spatially and temporally local programs while permitting selective fine-grained access to memory. By adaptively adjusting the access granularity, memory bandwidth and energy are reduced for data with low spatial/temporal locality without wasting control overheads or prefetching potential for data with high spatial locality. As such, our locality-aware memory hierarchy improves GPU performance, energy-efficiency, and memory throughput for a large range of applications.
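The core idea — adaptively choosing between coarse- and fine-grained fetches based on observed spatial locality — can be illustrated with a minimal sketch. This is a simplified, hypothetical model, not the paper's actual predictor: it tracks what fraction of a cache line's sectors were touched before eviction, and switches a region to fine-grained fetches when that utilization drops below a threshold. The constants (128 B lines, 32 B sectors, 0.5 threshold) are illustrative assumptions.

```python
# Hypothetical sketch of locality-aware granularity selection.
# Not the paper's exact mechanism; constants are assumptions.
COARSE_LINE = 128             # bytes per coarse-grained fetch
FINE_SECTOR = 32              # bytes per fine-grained fetch
SECTORS = COARSE_LINE // FINE_SECTOR

class GranularityPredictor:
    """Track the average fraction of sectors touched per evicted line;
    if spatial locality is low, prefer fine-grained fetches."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.touched = 0          # total sectors referenced before eviction
        self.evicted = 0          # total lines evicted

    def on_evict(self, sector_mask):
        # sector_mask: one boolean per sector, True if referenced.
        self.touched += sum(sector_mask)
        self.evicted += 1

    def utilization(self):
        # Default to coarse-grained (1.0) until we have observations.
        if self.evicted == 0:
            return 1.0
        return self.touched / (self.evicted * SECTORS)

    def fetch_bytes(self):
        # High spatial locality -> keep coarse-grained accesses;
        # low locality -> fetch only the needed sector.
        if self.utilization() >= self.threshold:
            return COARSE_LINE
        return FINE_SECTOR

# Regular, streaming access: every sector of each line is used.
regular = GranularityPredictor()
for _ in range(8):
    regular.on_evict([True, True, True, True])
print(regular.fetch_bytes())   # 128 -> stays coarse-grained

# Irregular access: only one sector per line is ever touched.
irregular = GranularityPredictor()
for _ in range(8):
    irregular.on_evict([True, False, False, False])
print(irregular.fetch_bytes())  # 32 -> switches to fine-grained
```

The sketch captures the trade-off the abstract describes: coarse fetches preserve bandwidth and prefetching benefits for dense access patterns, while fine fetches avoid moving unused bytes for sparse ones.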

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors

Contact us: