
An Energy Efficient GPGPU Memory Hierarchy with Tiny Incoherent Caches

Alamelu Sankaranarayanan, Ehsan K. Ardestani, Jose Luis Briz, Jose Renau
Dept. of Computer Engineering, University of California Santa Cruz
International Symposium on Low Power Electronics and Design (ISLPED), 2013

@inproceedings{sankaranarayan2013anenergy,
   title={An Energy Efficient GPGPU Memory Hierarchy with Tiny Incoherent Caches},
   author={Sankaranarayanan, Alamelu and Ardestani, Ehsan K. and Briz, Jose Luis and Renau, Jose},
   booktitle={International Symposium on Low Power Electronics and Design (ISLPED)},
   year={2013}
}

With each generation and the ever-increasing promise of computing power, GPGPUs have grown quickly in size, and energy consumption has become a major bottleneck for them. The first-level data cache and the scratchpad memory are critical to the performance of a GPGPU, but they are extremely energy inefficient due to the large number of cores they must serve. This problem could be mitigated by introducing a cache higher up in the hierarchy that services fewer cores, but doing so raises cache coherence issues that can become very significant, especially for a GPGPU with hundreds of thousands of in-flight threads. In this paper, we propose adding incoherent tinyCaches between each lane in an SM and the first-level data cache that is currently shared by all the lanes in the SM. In a conventional multiprocessor, this would require hardware cache coherence across all the SM lanes, which collectively handle hundreds of thousands of threads. Our incoherent tinyCache architecture exploits certain unique features of the CUDA/OpenCL programming model to avoid complex coherence schemes. The tinyCache filters out 62% of memory requests that would otherwise need to be serviced by the DL1G, and almost 81% of scratchpad memory requests, allowing us to achieve a 37% energy reduction in the on-chip memory hierarchy. We evaluate the tinyCache for different memory patterns and show that it is beneficial in most cases.
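To make the idea concrete, the sketch below models one way a per-lane incoherent tinyCache could behave under the CUDA/OpenCL consistency model: a handful of direct-mapped lines, write-through to the next level, and a bulk invalidate at synchronization points. The line count, line size, write policy, and all names here are assumptions for illustration, not the parameters or exact scheme used in the paper.

    // Illustrative model of a per-lane "tinyCache" (assumptions, not the paper's design):
    // a few direct-mapped lines, write-through, flash-invalidated at barriers.
    #include <array>
    #include <cstddef>
    #include <cstdint>

    struct TinyCache {
        static constexpr std::size_t kLines    = 4;   // assumed number of lines per lane
        static constexpr std::size_t kLineSize = 32;  // assumed line size in bytes

        struct Line { std::uint64_t tag = 0; bool valid = false; };
        std::array<Line, kLines> lines{};

        // Returns true if the access hits and is filtered out, i.e. it never
        // reaches the shared DL1 or the scratchpad banks.
        bool access(std::uint64_t addr, bool is_write) {
            std::uint64_t block = addr / kLineSize;
            Line &l = lines[block % kLines];
            bool hit = l.valid && l.tag == block;
            if (!hit) { l.tag = block; l.valid = true; }  // allocate on miss
            // Writes are always forwarded to the next level (write-through),
            // so other lanes observe them after the next synchronization point.
            return hit && !is_write;
        }

        // Called when the lane reaches a barrier or kernel boundary: the
        // CUDA/OpenCL model only guarantees visibility of other threads'
        // writes at such points, so flash-invalidating here avoids the need
        // for hardware coherence between lanes.
        void barrier_invalidate() {
            for (Line &l : lines) l.valid = false;
        }
    };

Under these assumptions, filtering happens at the lane level before requests reach the shared structures, which is the effect behind the 62% DL1G and 81% scratchpad request reductions reported in the abstract.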
