Exploiting Memory Access Patterns to Improve Memory Performance in Data-Parallel Architectures

Byunghyun Jang, Dana Schaa, Perhaad Mistry, David Kaeli
Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, 02115, USA
IEEE Transactions on Parallel and Distributed Systems, January 2011 (vol. 22 no. 1), pp. 105-118


@article{jang2011exploiting,
   title={Exploiting memory access patterns to improve memory performance in data-parallel architectures},
   author={Jang, B. and Schaa, D. and Mistry, P. and Kaeli, D.},
   journal={IEEE Transactions on Parallel and Distributed Systems},
   volume={22},
   number={1},
   pages={105--118},
   year={2011},
   publisher={IEEE Computer Society}
}

The introduction of General-Purpose computation on GPUs (GPGPUs) has changed the landscape for the future of parallel computing. At the core of this phenomenon are massively multithreaded, data-parallel architectures possessing impressive acceleration ratings, offering low-cost supercomputing together with attractive power budgets. Even given the numerous benefits provided by GPGPUs, there remain a number of barriers that delay wider adoption of these architectures. One major issue is the heterogeneous and distributed nature of the memory subsystem commonly found on data-parallel architectures. Application acceleration is highly dependent on being able to utilize the memory subsystem effectively so that all execution units remain busy. In this paper, we present techniques for enhancing the memory efficiency of applications on data-parallel architectures, based on the analysis and characterization of memory access patterns in loop bodies; we target vectorization via data transformation to benefit vector-based architectures (e.g., AMD GPUs) and algorithmic memory selection for scalar-based architectures (e.g., NVIDIA GPUs). We demonstrate the effectiveness of our proposed methods with kernels from a wide range of benchmark suites. For the benchmark kernels studied, we achieve consistent and significant performance improvements (up to 11.4x and 13.5x over baseline GPU implementations on each platform, respectively) by applying our proposed methodology.
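One of the data transformations commonly used to improve memory access patterns on GPUs is converting an array-of-structures (AoS) layout into a structure-of-arrays (SoA) layout, so that adjacent threads or vector lanes read consecutive addresses. The sketch below is an illustrative example of that general idea in plain Python, not the authors' actual implementation; the `aos_to_soa` helper and the particle data are hypothetical.

```python
# Hypothetical sketch of an AoS -> SoA data transformation, the kind of
# layout change that turns strided GPU memory accesses into unit-stride,
# coalesced/vectorizable ones. Not the paper's implementation.

def aos_to_soa(aos, fields):
    """Transform a list of records (AoS) into per-field arrays (SoA)."""
    return {f: [rec[f] for rec in aos] for f in fields}

# AoS layout: each record's x, y, z are adjacent in memory, so thread i
# reading field 'x' accesses every third element (a strided pattern).
particles_aos = [{"x": 1.0, "y": 2.0, "z": 3.0},
                 {"x": 4.0, "y": 5.0, "z": 6.0}]

# SoA layout: all 'x' values are contiguous, so threads 0..n-1 read
# consecutive addresses, the pattern both vector-based (AMD) and
# scalar-based (NVIDIA) GPU memory systems serve most efficiently.
particles_soa = aos_to_soa(particles_aos, ("x", "y", "z"))
print(particles_soa["x"])  # [1.0, 4.0]
```

On a real GPU the same reorganization would be applied to the buffers passed to the kernel, with each field stored in its own contiguous array.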