
Model-driven optimisation of memory hierarchy and multithreading on GPUs

Andrew A. Haigh, Eric C. McCreath
Research School of Computer Science, The Australian National University, Canberra, Australia
13th Australasian Symposium on Parallel and Distributed Computing (AusPDC 2015), 2015

@inproceedings{haigh2015model,
   title={Model-driven optimisation of memory hierarchy and multithreading on GPUs},
   author={Haigh, Andrew A and McCreath, Eric C},
   booktitle={13th Australasian Symposium on Parallel and Distributed Computing (AusPDC 2015)},
   year={2015}
}

Due to their potentially high peak performance and energy efficiency, GPUs are increasingly popular for scientific computations. However, the complexity of the architecture makes it difficult to write code that achieves high performance. Two of the most important factors in achieving high performance are the usage of the GPU memory hierarchy and the way in which work is mapped to threads and blocks. The dominant frameworks for GPU computing, CUDA and OpenCL, leave these decisions largely to the programmer. In this work, we address this in part by proposing a technique that simultaneously manages use of the GPU low-latency shared memory and chooses the granularity with which to divide the work (block size). We show that a relatively simple heuristic based on an abstraction of the GPU architecture is able to make these decisions and achieve average performance within 17% of an optimal configuration on an NVIDIA Tesla K20.
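The two decisions the abstract refers to are how to use the low-latency shared memory and what block size to launch with. The paper's architectural model is not reproduced here; as a rough sketch of the kind of choice being automated, the CUDA fragment below picks a block size for a toy kernel using the runtime's occupancy API (cudaOccupancyMaxPotentialBlockSizeVariableSMem) rather than the authors' heuristic. The kernel scale2x and the per-block shared-memory function smemForBlock are illustrative assumptions, not taken from the paper.

#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: each block stages its elements in shared memory
// before operating on them (placeholder computation: scale by 2).
__global__ void scale2x(const float *in, float *out, int n)
{
    extern __shared__ float tile[];                 // dynamic shared memory
    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (gid < n) ? in[gid] : 0.0f; // stage through shared memory
    __syncthreads();
    if (gid < n)
        out[gid] = 2.0f * tile[threadIdx.x];
}

// Per-block dynamic shared memory as a function of block size
// (one float per thread in this sketch).
size_t smemForBlock(int blockSize) { return blockSize * sizeof(float); }

int main()
{
    const int n = 1 << 20;
    float *in, *out;
    cudaMalloc((void **)&in, n * sizeof(float));
    cudaMalloc((void **)&out, n * sizeof(float));

    // Ask the runtime for a block size that maximises occupancy given the
    // kernel's register and shared-memory demands. This stands in for the
    // paper's model-driven choice of block size.
    int minGridSize = 0, blockSize = 0;
    cudaOccupancyMaxPotentialBlockSizeVariableSMem(
        &minGridSize, &blockSize, scale2x, smemForBlock);

    int gridSize = (n + blockSize - 1) / blockSize;
    scale2x<<<gridSize, blockSize, smemForBlock(blockSize)>>>(in, out, n);
    cudaDeviceSynchronize();

    printf("chosen block size: %d (grid %d)\n", blockSize, gridSize);
    cudaFree(in);
    cudaFree(out);
    return 0;
}

In the paper this choice is made jointly with the decision of which data to place in shared memory, using an abstraction of the GPU architecture (evaluated on a Tesla K20), rather than the runtime occupancy heuristic shown above.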
