Complexity effective memory access scheduling for many-core accelerator architectures
Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
In MICRO 42: Proceedings of the 42nd Annual IEEE/ACM International Symposium on Microarchitecture (2009), pp. 34–44
@conference{yuan2010complexity,
title={Complexity effective memory access scheduling for many-core accelerator architectures},
author={Yuan, G.L. and Bakhoda, A. and Aamodt, T.M.},
booktitle={Proceedings of the 42nd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO 42)},
pages={34--44},
issn={1072-4451},
year={2009},
organization={IEEE}
}
Modern DRAM systems rely on memory controllers that employ out-of-order scheduling to maximize row access locality and bank-level parallelism, which in turn maximizes DRAM bandwidth. This is especially important in graphics processing unit (GPU) architectures, where the abundant parallelism places heavy demands on the memory system. The logic needed for out-of-order scheduling, however, can be expensive in terms of area, especially when compared to an in-order scheduling approach. In this paper, we propose a complexity-effective solution to DRAM request scheduling that recovers most of the performance lost by a naive in-order first-in first-out (FIFO) DRAM scheduler relative to an aggressive out-of-order DRAM scheduler. We observe that the memory request stream from an individual GPU “shader core” tends to have sufficient row access locality to maximize DRAM efficiency in most applications without significant reordering. However, the interconnection network across which memory requests travel from the shader cores to the DRAM controllers tends to finely interleave the numerous request streams, destroying the row access locality of the resultant stream seen at each DRAM controller. To address this, we employ an interconnection network arbitration scheme that preserves the row access locality of individual memory request streams and thereby achieves DRAM efficiency and system performance close to that of out-of-order memory request scheduling, with a much simpler design. We evaluate our arbitration scheme on crossbar, mesh, and ring networks for a baseline architecture with 28 shader cores (224 ALUs) and 8 memory channels, each controlled by its own DRAM controller, supporting up to 1,792 in-flight memory requests. Our results show that our interconnect arbitration scheme, coupled with a banked FIFO in-order scheduler, obtains up to 91% of the performance of an out-of-order memory scheduler on a crossbar network with eight-entry DRAM controller queues.
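The core idea — keep granting the interconnect output to the same shader core's request stream while it continues to hit the same DRAM row, so that row access locality survives the trip to the DRAM controller — can be illustrated with a toy model. The Python sketch below is an illustration of a grant-holding arbiter in the spirit of the paper, not the authors' implementation: the function names, the `ROW_SHIFT` address-to-row mapping, and the round-robin fallback are assumptions made for this example.

from collections import deque

ROW_SHIFT = 9  # hypothetical mapping: bits above the column bits select the DRAM row

def row_of(addr):
    """Toy address-to-row mapping; a real controller also factors in bank/channel bits."""
    return addr >> ROW_SHIFT

def hold_grant_arbiter(queues, last_port, last_row):
    """Pick one request to forward toward the DRAM controller this cycle.

    Policy: hold the grant on the previously granted input port while its
    head-of-line request targets the same DRAM row (preserving that shader
    core's row access locality); otherwise fall back to round-robin.
    """
    n = len(queues)
    # Hold the grant if the last winner keeps streaming to the same row.
    if last_port is not None and queues[last_port]:
        if row_of(queues[last_port][0]) == last_row:
            return last_port, queues[last_port].popleft()
    # Otherwise rotate the grant, starting just past the last winner.
    start = 0 if last_port is None else (last_port + 1) % n
    for i in range(n):
        port = (start + i) % n
        if queues[port]:
            return port, queues[port].popleft()
    return None, None  # nothing pending

# Two shader cores' request streams; core 0 streams within a single row.
queues = [deque([0x1000, 0x1040, 0x1080]), deque([0x9000, 0x9040])]
port, row = None, None
while any(queues):
    port, req = hold_grant_arbiter(queues, port, row)
    row = row_of(req)
    print(f"grant port {port}: addr {req:#06x} (row {row})")

With plain round-robin arbitration the two streams would interleave request by request, so the DRAM controller would see alternating rows; holding the grant delivers each core's row-local burst intact, which is why a simple banked in-order FIFO scheduler then suffices without out-of-order reordering logic.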
November 28, 2010 by hgpu