The Tradeoffs of Fused Memory Hierarchies in Heterogeneous Computing Architectures
Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37831
9th Conference on Computing Frontiers (CF '12), 2012
@inproceedings{spafford2012tradeoffs,
title={The tradeoffs of fused memory hierarchies in heterogeneous computing architectures},
author={Spafford, K.L. and Meredith, J.S. and Lee, S. and Li, D. and Roth, P.C. and Vetter, J.S.},
booktitle={Proceedings of the 9th conference on Computing Frontiers},
pages={103--112},
year={2012},
organization={ACM}
}
With the rise of general-purpose computing on graphics processing units (GPGPU), the influence of consumer markets can now be seen across the spectrum of computer architectures. In fact, many of the high-ranking Top500 HPC systems now include these accelerators. Traditionally, GPUs have connected to the CPU via the PCIe bus, which has proven to be a significant bottleneck for scalable scientific applications. Now, a trend toward tighter integration between CPU and GPU has removed this bottleneck and unified the memory hierarchy for both CPU and GPU cores. We examine the impact of this trend on high performance scientific computing by investigating AMD's new Fusion Accelerated Processing Unit (APU) as a testbed. In particular, we evaluate the tradeoffs in performance, power consumption, and programmability when comparing this unified memory hierarchy with similar, but discrete, GPUs.
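To see why PCIe transfers motivate fused designs, a back-of-the-envelope model is helpful. The sketch below is illustrative only and not from the paper; the bandwidth figures (PCIe effective bandwidth, discrete-GPU memory bandwidth, APU shared-memory bandwidth) are assumed, order-of-magnitude values, not measurements from the authors' testbed.

```python
# Illustrative model (assumed bandwidths, not measured): compare total time
# for a memory-bound kernel on a discrete GPU (with PCIe copies) versus a
# fused APU (no copies, but lower shared-memory bandwidth).

def pcie_transfer_s(n_bytes, pcie_gb_s=6.0):
    """Time to move data one way over PCIe at an assumed effective bandwidth."""
    return n_bytes / (pcie_gb_s * 1e9)

def kernel_s(n_bytes, mem_gb_s=150.0):
    """Time for a memory-bound kernel at an assumed device memory bandwidth."""
    return n_bytes / (mem_gb_s * 1e9)

def discrete_gpu_s(n_bytes):
    # Discrete GPU: copy input over PCIe, compute, copy result back.
    return 2 * pcie_transfer_s(n_bytes) + kernel_s(n_bytes)

def fused_apu_s(n_bytes, shared_gb_s=25.0):
    # Fused APU: no PCIe copies, but the shared hierarchy has lower bandwidth.
    return kernel_s(n_bytes, mem_gb_s=shared_gb_s)

n = 256 * 2**20  # 256 MiB working set, touched once by the kernel
print(f"discrete GPU: {discrete_gpu_s(n) * 1e3:.1f} ms")
print(f"fused APU:    {fused_apu_s(n) * 1e3:.1f} ms")
```

Under these assumptions, a single-pass, memory-bound kernel favors the fused design because the PCIe copies dominate; a compute-heavy kernel that reuses data on the device would shift the balance back toward the discrete GPU's higher memory bandwidth, which is exactly the tradeoff the paper evaluates.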
June 6, 2012 by hgpu