Performance analysis of memory transfers and GEMM subroutines on NVIDIA Tesla GPU cluster

Veerendra Allada, Troy Benjegerdes, Brett Bode
Electrical and Computer Engineering, Ames Laboratory, Iowa State University
IEEE International Conference on Cluster Computing and Workshops, 2009. CLUSTER ’09, p.1-9


title={Performance analysis of memory transfers and GEMM subroutines on NVIDIA Tesla GPU cluster},
author={Allada, V. and Benjegerdes, T. and Bode, B.},
booktitle={Cluster Computing and Workshops, 2009. CLUSTER '09. IEEE International Conference on},



Commodity clusters augmented with application accelerators are evolving into competitive high-performance computing systems. The graphics processing unit (GPU), with its very high arithmetic density and performance-per-price ratio, is a good platform for scientific application acceleration. In addition to the interconnect bottlenecks among the cluster compute nodes, the cost of memory copies between the host and the GPU device has to be carefully amortized to improve the overall efficiency of the application. Scientific applications also rely on efficient implementations of the basic linear algebra subroutines (BLAS), among which the general matrix multiply (GEMM) is considered the workhorse subroutine. In this paper, we study the performance of the memory copies and GEMM subroutines that are crucial for porting computational chemistry algorithms to GPU clusters. To that end, a benchmark based on the NetPIPE framework is developed to evaluate the latency and bandwidth of memory copies between the host and the GPU device. The performance of the single- and double-precision GEMM subroutines from the NVIDIA CUBLAS 2.0 library is studied. The results are compared with those of the BLAS routines from the Intel Math Kernel Library (MKL) to understand the computational trade-offs. The test bed is an Intel Xeon cluster equipped with NVIDIA Tesla GPUs.
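The amortization argument in the abstract can be made concrete with a small accounting sketch: a GEMM on the GPU only pays off if the 2mnk floating-point operations outweigh the time spent copying A and B to the device and C back over PCIe. The bandwidth and kernel-time figures below are hypothetical placeholders for illustration, not measurements from the paper.

```python
def gemm_flops(m, n, k):
    """Floating-point operations for C = alpha*A*B + beta*C (GEMM)."""
    return 2.0 * m * n * k  # one multiply + one add per inner-product term


def effective_gflops(m, n, k, bytes_per_elem, copy_bw_gbs, kernel_time_s):
    """GFLOP/s including host<->device copy cost, not just kernel time."""
    # A (m*k) and B (k*n) copied in, C (m*n) copied back.
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)
    copy_time = bytes_moved / (copy_bw_gbs * 1e9)  # seconds over PCIe
    total = copy_time + kernel_time_s
    return gemm_flops(m, n, k) / total / 1e9


# Example: 4096^3 SGEMM with an assumed 3 GB/s copy bandwidth and an
# assumed 0.5 s kernel time (illustrative numbers only).
rate = effective_gflops(4096, 4096, 4096, 4, 3.0, 0.5)
print(f"effective rate: {rate:.1f} GFLOP/s")
```

For large square matrices the copy cost grows as O(n^2) while the compute grows as O(n^3), which is why the abstract's trade-off favors the GPU only beyond some problem size.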