Performance models for CUDA streams on NVIDIA GeForce series

Juan Gomez-Luna, Jose Maria Gonzalez-Linares, Jose Ignacio Benavides, Nicolas Guil
Dept. of Computer Architecture and Electronics, University of Cordoba
University of Cordoba, 2012

@article{gomez2012performance,

   title={Performance models for CUDA streams on NVIDIA GeForce series},

   author={G{\'o}mez-Luna, J. and Gonz{\'a}lez-Linares, J.M. and Benavides, J.I. and Guil, N.},

   year={2012}

}

Graphics Processing Units (GPUs) have emerged as general-purpose coprocessors in high-performance computing applications since the launch of the Compute Unified Device Architecture (CUDA). However, they present an inherent performance bottleneck: communication between two separate address spaces (the main memory of the CPU and the memory of the GPU) is unavoidable. The CUDA Application Programming Interface (API) provides asynchronous transfers and streams, which permit a staged execution, as a way to overlap communication and computation. Nevertheless, there is no precise way to estimate the possible improvement due to overlapping, nor a rule to determine the optimal number of stages or streams into which computation should be divided. In this work, we present a methodology for modeling the performance of asynchronous data transfers with CUDA streams on different GPU architectures. We illustrate this methodology by deriving performance expressions for two consumer graphics architectures belonging to the most recent generations. These models make it possible to estimate the optimal number of streams into which the computation on the GPU should be broken up in order to obtain the highest performance improvement. Finally, we have successfully checked the suitability of our performance models on several NVIDIA devices belonging to the GeForce 8, 9, 200, 400 and 500 series.
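The overlap described above can be approximated with a simple two-stage pipeline model. The sketch below is illustrative only and is not the expressions derived in the paper: the symbols `t_transfer`, `t_kernel` and the per-stream overhead `t_overhead` are assumptions, but they show why splitting work across streams helps and why a per-stream overhead makes the optimal number of streams finite.

```python
def staged_time(t_transfer, t_kernel, n, t_overhead=0.0):
    """Estimated total time when work is split into n streams.

    Two-stage pipeline: the host-to-device copy of chunk i overlaps
    the kernel execution of chunk i-1. Illustrative symbols (not the
    paper's derived model):
      t_transfer -- total transfer time for the whole input
      t_kernel   -- total kernel execution time
      t_overhead -- assumed per-stream launch/synchronization overhead
    """
    fill_drain = (t_transfer + t_kernel) / n           # first copy + last kernel
    steady = (n - 1) / n * max(t_transfer, t_kernel)   # overlapped middle chunks
    return fill_drain + steady + n * t_overhead

# Example: 10 ms of transfers, 20 ms of kernel work, 0.1 ms overhead per stream.
best_n = min(range(1, 33), key=lambda n: staged_time(10.0, 20.0, n, 0.1))
print(best_n, staged_time(10.0, 20.0, best_n, 0.1))
```

With no overhead the model approaches max(t_transfer, t_kernel) as n grows, which is the ideal fully overlapped time; the overhead term pulls the optimum back to a finite stream count, mirroring the paper's goal of estimating that optimum per architecture.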
