Message passing for GPGPU clusters: CudaMPI
Department of Computer Science, University of Alaska Fairbanks, 202 Chapman Building, Fairbanks, Alaska 99712, USA
IEEE International Conference on Cluster Computing and Workshops, 2009. CLUSTER ’09
@conference{lawlor2009message,
title={Message passing for GPGPU clusters: CudaMPI},
author={Lawlor, O.S.},
booktitle={2009 IEEE International Conference on Cluster Computing and Workshops (CLUSTER '09)},
pages={1--8},
issn={1552-5244},
year={2009},
organization={IEEE}
}
We present and analyze two new communication libraries, cudaMPI and glMPI, that provide an MPI-like message passing interface for communicating data stored on the graphics cards of a distributed-memory parallel computer. These libraries can help applications that perform general-purpose computations on networked GPU clusters. We explore how to efficiently support both point-to-point and collective communication for either contiguous or noncontiguous data on modern graphics cards. Our software design is informed by a detailed analysis of the actual performance of modern graphics hardware, for which we develop and test a simple but useful performance model.
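For context, the sketch below shows the manual pattern that a GPU-aware message-passing layer like cudaMPI is designed to hide: staging device-resident data through a host buffer before handing it to ordinary MPI. This is an illustrative sketch only; the helper name gpu_send_to is not part of the cudaMPI API described in the paper.

```c
/* Sketch: sending data that lives in GPU memory over MPI by staging it
 * through a temporary host buffer. A GPU-aware library wraps this
 * pattern (and the matching receive path) behind an MPI-like call so
 * applications can pass device pointers directly. The helper name
 * gpu_send_to is hypothetical. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

void gpu_send_to(const float *d_buf, int count, int dest, MPI_Comm comm)
{
    /* Copy the device data into a host-side staging buffer... */
    float *h_buf = (float *)malloc(count * sizeof(float));
    cudaMemcpy(h_buf, d_buf, count * sizeof(float), cudaMemcpyDeviceToHost);

    /* ...then use standard MPI for the network transfer. */
    MPI_Send(h_buf, count, MPI_FLOAT, dest, /*tag=*/0, comm);

    free(h_buf);
}
```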
January 18, 2011 by hgpu