
Memory-level and Thread-level Parallelism Aware GPU Architecture Performance Analytical Model

Sunpyo Hong, Hyesoon Kim
School of Electrical and Computer Engineering / School of Computer Science, Georgia Institute of Technology
36th Annual International Symposium on Computer Architecture (ISCA '09), 2009

@article{Hong:2009:AMG:1555815.1555775,

   author={Hong, Sunpyo and Kim, Hyesoon},

   title={An analytical model for a GPU architecture with memory-level and thread-level parallelism awareness},

   journal={SIGARCH Comput. Archit. News},

   volume={37},

   issue={3},

   month={June},

   year={2009},

   issn={0163-5964},

   pages={152–163},

   numpages={12},

   url={http://doi.acm.org/10.1145/1555815.1555775},

   doi={10.1145/1555815.1555775},

   acmid={1555775},

   publisher={ACM},

   address={New York, NY, USA},

   keywords={GPU architecture, analytical model, cuda, memory level parallelism, performance estimation, warp level parallelism}

}


GPU architectures are increasingly important in the multi-core era due to their high number of parallel processors. Programming thousands of massively parallel threads is a big challenge for software engineers, but understanding the performance bottlenecks of those parallel programs on GPU architectures to improve application performance is even more difficult. Current approaches rely on programmers to tune their applications by exploiting the design space exhaustively without fully understanding the performance characteristics of their applications. To provide insights into the performance bottlenecks of parallel applications on GPU architectures, we propose a simple analytical model that estimates the execution time of massively parallel programs. The key component of our model is estimating the number of parallel memory requests (we call this the memory warp parallelism) by considering the number of running threads and memory bandwidth. Based on the degree of memory warp parallelism, the model estimates the cost of memory requests, thereby estimating the overall execution time of a program. Comparisons between the outcome of the model and the actual execution time in several GPUs show that the geometric mean of absolute error of our model on micro-benchmarks is 5.4% and on GPU computing applications is 13.3%. All the applications are written in the CUDA programming language.
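The core idea of the abstract, estimating memory warp parallelism (MWP) from memory latency, bandwidth, and the number of running warps, and then charging memory cost in overlapping groups, can be sketched as follows. This is a heavily simplified illustration, not the paper's full model; all parameter names and the specific formulas are assumptions chosen to mirror the abstract's description.

```python
# Simplified sketch of an MWP-style cost model in the spirit of
# Hong & Kim (ISCA '09). Parameter names and formulas are illustrative
# assumptions, not the paper's exact equations.

def estimate_mwp(mem_latency, departure_delay, active_warps,
                 peak_bandwidth, bw_per_warp, num_sms):
    """Estimate memory warp parallelism: how many warps' memory
    requests can be serviced concurrently."""
    # Latency-limited MWP: requests that fit within one memory latency
    # when consecutive requests depart departure_delay cycles apart.
    mwp_without_bw = mem_latency / departure_delay
    # Bandwidth-limited MWP: warps the memory system can sustain at
    # peak bandwidth, given each warp's bandwidth demand per SM.
    mwp_peak_bw = peak_bandwidth / (bw_per_warp * num_sms)
    # MWP can never exceed the number of running warps.
    return min(mwp_without_bw, mwp_peak_bw, active_warps)

def estimate_exec_cycles(comp_cycles, mem_cycles, n_warps, mwp):
    """Toy execution-time estimate: memory requests from n_warps warps
    overlap in groups of size mwp, so total memory stall time scales
    with n_warps / mwp rather than with n_warps."""
    return comp_cycles + mem_cycles * (n_warps / mwp)
```

For example, with a 420-cycle memory latency, 10-cycle departure delay, 24 active warps, and a bandwidth limit that only sustains 2.5 concurrent memory warps, the model is bandwidth-bound: MWP = 2.5, and memory cost dominates the estimate.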
