A memory access model for highly-threaded many-core architectures

Lin Ma, Kunal Agrawal, Roger D. Chamberlain
Department of Computer Science and Engineering, Washington University in St. Louis, United States
Future Generation Computer Systems, 2013
@article{Ma2013,
   title={A memory access model for highly-threaded many-core architectures},
   journal={Future Generation Computer Systems},
   year={2013},
   issn={0167-739X},
   doi={http://dx.doi.org/10.1016/j.future.2013.06.020},
   url={http://www.sciencedirect.com/science/article/pii/S0167739X13001349},
   author={Lin Ma and Kunal Agrawal and Roger D. Chamberlain},
   keywords={PRAM, TMM, All Pairs Shortest Paths (APSP), Highly-threaded many-core, Memory access model}
}

A number of highly-threaded, many-core architectures hide memory-access latency by low-overhead context switching among a large number of threads. The speedup of a program on these machines depends on how well the latency is hidden. If the number of threads were infinite, theoretically, these machines could provide the performance predicted by the PRAM analysis of these programs. However, the number of threads per processor is not infinite, and is constrained by both hardware and algorithmic limits. In this paper, we introduce the Threaded Many-core Memory (TMM) model which is meant to capture the important characteristics of these highly-threaded, many-core machines. Since we model some important machine parameters of these machines, we expect analysis under this model to provide a more fine-grained and accurate performance prediction than the PRAM analysis. We analyze 4 algorithms for the classic all pairs shortest paths problem under this model. We find that even when two algorithms have the same PRAM performance, our model predicts different performance for some settings of machine parameters. For example, for dense graphs, the dynamic programming algorithm and Johnson’s algorithm have the same performance in the PRAM model. However, our model predicts different performance for large enough memory-access latency and validates the intuition that the dynamic programming algorithm performs better on these machines. We validate several predictions made by our model using empirical measurements on an instantiation of a highly-threaded, many-core machine, namely the NVIDIA GTX 480.
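
Since the abstract contrasts the dynamic programming APSP algorithm with Johnson's algorithm on these machines, a small illustration may help: the classic dynamic-programming formulation of APSP is the Floyd-Warshall recurrence, and the sketch below shows one straightforward CUDA realization of it. This is only an illustrative sketch under assumed parameters (kernel name fw_relax, a 16x16 thread block, n = 1024), not the kernel the authors measured on the GTX 480.

// Illustrative sketch of the dynamic-programming APSP recurrence
// (Floyd-Warshall) on a GPU. Not the authors' code; names, block
// size, and problem size are assumptions for the example only.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// One thread updates one (i, j) entry for the current pivot k:
// dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]).
__global__ void fw_relax(float* dist, int n, int k) {
    int i = blockIdx.y * blockDim.y + threadIdx.y;
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && j < n) {
        float via_k = dist[i * n + k] + dist[k * n + j];
        if (via_k < dist[i * n + j])
            dist[i * n + j] = via_k;
    }
}

int main() {
    const int n = 1024;                    // assumed problem size
    std::vector<float> h(n * n, 1e9f);     // "infinity" for absent edges
    for (int v = 0; v < n; ++v) h[v * n + v] = 0.0f;
    // ... fill h with edge weights here ...

    float* d;
    cudaMalloc(&d, n * n * sizeof(float));
    cudaMemcpy(d, h.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);

    dim3 block(16, 16);
    dim3 grid((n + block.x - 1) / block.x, (n + block.y - 1) / block.y);
    // n dependent pivot phases; each launches n*n independent updates.
    for (int k = 0; k < n; ++k)
        fw_relax<<<grid, block>>>(d, n, k);

    cudaMemcpy(h.data(), d, n * n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("dist[0][n-1] = %f\n", h[n - 1]);
    cudaFree(d);
    return 0;
}

Each of the n pivot phases exposes n^2 fine-grained, independent updates, so the hardware scheduler has far more ready threads than cores; this oversubscription is the latency-hiding regime that the TMM model, unlike the PRAM model, is intended to capture.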
