
An Investigation of Unified Memory Access Performance in CUDA

Raphael Landaverde, Tiansheng Zhang, Ayse K. Coskun, Martin Herbordt
Electrical and Computer Engineering Department, Boston University, Boston, MA, USA
IEEE High Performance Extreme Computing Conference (HPEC), 2014

@inproceedings{landaverde2014investigation,
  title={An Investigation of Unified Memory Access Performance in CUDA},
  author={Landaverde, Raphael and Zhang, Tiansheng and Coskun, Ayse K. and Herbordt, Martin},
  booktitle={IEEE High Performance Extreme Computing Conference (HPEC)},
  year={2014}
}

Managing memory between the CPU and GPU is a major challenge in GPU computing. Nvidia has recently introduced a programming model, Unified Memory Access (UMA), to simplify the complexities of memory management while claiming good overall performance. In this paper, we investigate this programming model and evaluate its performance and its simplifications to GPU programming based on our experimental results. We find that, beyond on-demand data transfers to the CPU, the GPU is also able to request on demand the subsets of data it requires. This feature allows UMA to outperform full data transfer methods for certain parallel applications and small data sizes. We also find, however, that for the majority of applications and memory access patterns, the performance overheads associated with UMA are significant, while the simplifications to the programming model restrict flexibility for adding future optimizations.
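For context, the following is a minimal sketch (not the paper's benchmark code) contrasting the two memory management styles compared in the paper: a managed allocation via cudaMallocManaged, where the CUDA runtime migrates data between host and device on demand, versus the conventional approach with separate host and device buffers and explicit cudaMemcpy transfers. The kernel, array size, and scaling factor below are illustrative assumptions.

// sketch of UMA vs. explicit transfers (illustrative, not from the paper)
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scaleKernel(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int N = 1 << 20;
    const size_t bytes = N * sizeof(float);

    // UMA path: one managed allocation visible to both CPU and GPU;
    // the runtime handles migration instead of explicit copies.
    float *managed;
    cudaMallocManaged(&managed, bytes);
    for (int i = 0; i < N; ++i) managed[i] = 1.0f;       // written on the CPU
    scaleKernel<<<(N + 255) / 256, 256>>>(managed, N, 2.0f);
    cudaDeviceSynchronize();                             // required before the CPU touches managed data again
    printf("UMA result: %f\n", managed[0]);
    cudaFree(managed);

    // Explicit path: separate host/device buffers with full cudaMemcpy transfers.
    float *host = (float *)malloc(bytes), *device;
    cudaMalloc(&device, bytes);
    for (int i = 0; i < N; ++i) host[i] = 1.0f;
    cudaMemcpy(device, host, bytes, cudaMemcpyHostToDevice);
    scaleKernel<<<(N + 255) / 256, 256>>>(device, N, 2.0f);
    cudaMemcpy(host, device, bytes, cudaMemcpyDeviceToHost);
    printf("Explicit result: %f\n", host[0]);
    cudaFree(device);
    free(host);
    return 0;
}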