Performance models for CPU-GPU data transfers
VU University Amsterdam
14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID), Chicago, IL, May 2014
@inproceedings{werkhoven2014performance,
  title     = {Performance models for CPU-GPU data transfers},
  author    = {van Werkhoven, B. and Maassen, J. and Seinstra, F.J. and Bal, H.E.},
  booktitle = {14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID)},
  pages     = {1--10},
  year      = {2014},
  organization = {IEEE/ACM}
}
Many GPU applications perform data transfers to and from GPU memory at regular intervals, for example because the data does not fit into GPU memory or because of inter-node communication at the end of each time step. Overlapping GPU computation with CPU-GPU communication can reduce the cost of moving data. Several different techniques exist for transferring data to and from GPU memory and for overlapping those transfers with GPU computation, but it is currently not known when to apply which method. Implementing and benchmarking each method is often a large programming effort and therefore not feasible. To solve these issues and to provide insight into the performance of GPU applications, we propose an analytical performance model that includes PCIe transfers and overlapping computation and communication. Our evaluation shows that the performance models are capable of correctly classifying the relative performance of the different implementations.
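As an illustration of the kind of overlap the paper studies, the sketch below overlaps host-to-device and device-to-host transfers with kernel execution using CUDA streams and pinned host memory. This is not code from the paper; the scale() kernel, the chunk count, and the array size are arbitrary choices made for the example.

// Minimal sketch (illustrative, not the paper's benchmark code): overlapping
// CPU-GPU transfers with kernel execution using CUDA streams and pinned memory.
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main(void) {
    const int N = 1 << 24;          // total number of floats (arbitrary size)
    const int CHUNKS = 4;           // split the work into independent chunks
    const int CHUNK = N / CHUNKS;

    float *h_data, *d_data;
    // Pinned (page-locked) host memory is required for truly asynchronous copies.
    cudaMallocHost(&h_data, N * sizeof(float));
    cudaMalloc(&d_data, N * sizeof(float));
    for (int i = 0; i < N; i++) h_data[i] = 1.0f;

    cudaStream_t streams[CHUNKS];
    for (int s = 0; s < CHUNKS; s++) cudaStreamCreate(&streams[s]);

    // Each stream copies its chunk in, processes it, and copies it back.
    // Copies in one stream can overlap with kernels running in another stream.
    for (int s = 0; s < CHUNKS; s++) {
        size_t offset = (size_t)s * CHUNK;
        cudaMemcpyAsync(d_data + offset, h_data + offset,
                        CHUNK * sizeof(float), cudaMemcpyHostToDevice, streams[s]);
        scale<<<(CHUNK + 255) / 256, 256, 0, streams[s]>>>(d_data + offset, CHUNK, 2.0f);
        cudaMemcpyAsync(h_data + offset, d_data + offset,
                        CHUNK * sizeof(float), cudaMemcpyDeviceToHost, streams[s]);
    }
    cudaDeviceSynchronize();

    printf("h_data[0] = %f\n", h_data[0]);  // expect 2.0

    for (int s = 0; s < CHUNKS; s++) cudaStreamDestroy(streams[s]);
    cudaFreeHost(h_data);
    cudaFree(d_data);
    return 0;
}

Splitting the data into independent chunks lets the copy engine transfer one chunk while the compute engine processes another; this overlap of PCIe transfers with computation is exactly what the proposed performance models aim to predict.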
June 5, 2014 by bennotsi