
Hybrid coherence for scalable multicore architectures

John Henry Kelm
Computer Engineering, Graduate College, University of Illinois at Urbana-Champaign
Ph.D. dissertation, 2011

@phdthesis{kelm2011hybrid,
   title={Hybrid coherence for scalable multicore architectures},
   author={Kelm, John Henry},
   school={University of Illinois at Urbana-Champaign},
   year={2011}
}

This work describes a cache architecture and memory model for 1000+ core microprocessors. Our approach exploits workload characteristics and programming model assumptions to build a hybrid memory model that incorporates features from both software-managed coherence schemes and hardware cache coherence. The goal is to achieve the scalability found in compute accelerators, which support relaxed ordering of memory operations and programmer-managed coherence, while providing a programming interface akin to the strongly ordered, cache-coherent memory models found in general-purpose multicore processors today.

The research presented in this dissertation supports the following thesis: to be scalable and programmable, future multicore systems require a cached, single-address-space memory hierarchy. A hybrid software/hardware approach to coherence management is required to support such a memory hierarchy in 1000+ core processors, and it is achievable only by leveraging the characteristics of target applications and system software.

We motivate a hybrid memory model and present our approach to the challenges facing such a model. We discuss and evaluate a scalable 1024-core architecture, workloads that we see as targets for such an architecture, a memory model that relies on software management of coherence, and scalable hardware coherence schemes. Using these components, we develop the software and hardware support for a hybrid memory model. We demonstrate that our techniques can be used to reduce hardware design complexity, to increase software scalability, or both.
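To make the distinction concrete, below is a minimal C sketch of the programmer-managed half of such a hybrid model: a producer/consumer handoff in which software explicitly writes back and invalidates cache lines around a synchronizing flag. The primitives sw_writeback() and sw_invalidate() are hypothetical stand-ins for the cache-management operations a software-managed scheme might expose; they are illustrative assumptions, not the interface defined in the dissertation.

/* Sketch of programmer-managed coherence for a producer/consumer
 * handoff. sw_writeback() and sw_invalidate() are hypothetical
 * placeholders for ISA- or runtime-level cache operations; here
 * they are stubbed out so the example compiles and runs. */
#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

static int shared_buf[64];
static atomic_int ready = 0;

/* Hypothetical: write dirty lines in [p, p+len) back to the shared
 * cache level so other cores can observe the new values. */
static void sw_writeback(void *p, size_t len) { (void)p; (void)len; }

/* Hypothetical: discard possibly stale local copies of [p, p+len)
 * so the next read fetches from the shared cache level. */
static void sw_invalidate(void *p, size_t len) { (void)p; (void)len; }

static void producer(void) {
    for (int i = 0; i < 64; i++)
        shared_buf[i] = i;                        /* writes land in the private cache */
    sw_writeback(shared_buf, sizeof shared_buf);  /* publish before signaling */
    atomic_store_explicit(&ready, 1, memory_order_release);
}

static void consumer(void) {
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                                         /* wait for the handoff */
    sw_invalidate(shared_buf, sizeof shared_buf); /* drop stale local copies */
    printf("%d\n", shared_buf[63]);               /* now observes the new data */
}

int main(void) {
    producer();
    consumer();
    return 0;
}

Under full hardware coherence the two explicit cache operations would be unnecessary; the premise of the hybrid model is to preserve that simpler programming interface in the common case while shifting coherence management into software only where the target workloads and system software make it cheap.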
