
Bandwidth Requirements of GPU Architectures

Benjamin C. Johnstone
Rochester Institute of Technology, 2014

@article{johnstone2014bandwidth,
   title={Bandwidth Requirements of GPU Architectures},
   author={Johnstone, Benjamin C.},
   year={2014}
}

A new trend in chip multiprocessor (CMP) design is to incorporate graphics processing unit (GPU) cores, making the chip heterogeneous. GPU cores have a higher bandwidth requirement than CPU cores, as they tend to generate many more memory requests. To achieve good performance, there must be sufficient bandwidth between the GPU shader cores and main memory to service these requests in a timely manner; however, designing for the highest possible bandwidth leads to high energy costs. The communication requirements of GPU cores must therefore be determined in order to choose a proper interconnect. To this end, we simulated several CUDA benchmarks at varying bandwidths using the GPGPU-Sim simulator. Our results show that the communication requirements of GPUs vary from workload to workload. We propose that cores be connected by a photonic interconnect capable of supporting different bandwidths in order to reduce power consumption; for each transmission, the bandwidth used depends on how it affects performance. We determined that the ratio of interconnect-shader stalls to the total number of execution cycles is a good indicator of whether an application will be bandwidth-sensitive, and we used this finding to develop a bandwidth selection policy for GPU applications using a photonic NoC. With our policy's selections, the photonic interconnect used 12.5% less power than a photonic interconnect with the optimal-performing choices, which gave a performance improvement of only 1.37% over our policy. The photonic interconnect with our policy also had the lowest energy-delay product of the interconnects we compared it against.
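
The thesis's policy is not reproduced here, but a minimal Python sketch of the idea described in the abstract might look like the following: classify each kernel by its interconnect-shader stall ratio and assign a photonic link bandwidth accordingly. The threshold, bandwidth values, and function names are illustrative assumptions, not figures from the thesis.

# Hedged sketch of a stall-ratio-based bandwidth selection policy.
# All numeric values (threshold, bandwidths) are hypothetical.

def stall_ratio(interconnect_shader_stalls: int, total_cycles: int) -> float:
    """Fraction of execution cycles spent stalled between the NoC and shader cores."""
    return interconnect_shader_stalls / total_cycles

def select_bandwidth(stalls: int, cycles: int,
                     low_bw_gbps: float = 128.0,
                     high_bw_gbps: float = 256.0,
                     threshold: float = 0.10) -> float:
    """Pick a photonic link bandwidth for a kernel.

    A kernel whose stall ratio exceeds the threshold is treated as
    bandwidth-sensitive and given the high-bandwidth configuration;
    otherwise the lower-power, low-bandwidth configuration is used.
    """
    if stall_ratio(stalls, cycles) > threshold:
        return high_bw_gbps
    return low_bw_gbps

def energy_delay_product(energy_joules: float, exec_time_s: float) -> float:
    """Energy-delay product (EDP) used to compare interconnect configurations."""
    return energy_joules * exec_time_s

if __name__ == "__main__":
    # Example: a kernel stalled on the interconnect for 15% of its cycles
    # would be classified as bandwidth-sensitive.
    print(select_bandwidth(stalls=150_000, cycles=1_000_000))  # -> 256.0

In practice, the stall counts would come from simulator statistics (e.g., GPGPU-Sim output) gathered per kernel or per benchmark run.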
