Enabling and Scaling Matrix Computations on Heterogeneous Multi-Core and Multi-GPU Systems
EECS Department, University of Tennessee, Knoxville, TN, USA
The 26th ACM International Conference on Supercomputing (ICS 2012), 2012
@inproceedings{song2012enabling,
title={Enabling and Scaling Matrix Computations on Heterogeneous Multi-Core and Multi-GPU Systems},
author={Song, F. and Tomov, S. and Dongarra, J.},
booktitle={The 26th ACM International Conference on Supercomputing (ICS 2012)},
year={2012}
}
We present a new approach to utilizing all CPU cores and all GPUs on heterogeneous multi-core and multi-GPU systems to support dense matrix computations efficiently. The main idea is that we treat a heterogeneous system as a distributed-memory machine, and use a heterogeneous multi-level block cyclic distribution method to allocate data to the host and multiple GPUs to minimize communication. We design heterogeneous algorithms with hybrid tiles to accommodate the processor heterogeneity, and introduce an auto-tuning method to determine the hybrid tile sizes that attain both high performance and load balancing. We have also implemented a new runtime system and applied it to the Cholesky and QR factorizations. Our approach is designed to achieve four objectives: a high degree of parallelism, minimized synchronization, minimized communication, and load balancing. Our experiments on a compute node (with two Intel Westmere hexa-core CPUs and three Nvidia Fermi GPUs), as well as on up to 100 compute nodes on the Keeneland system [31], demonstrate the scalability, load balancing, and efficiency of our approach.
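To make the idea of a block-cyclic distribution over a host and multiple GPUs concrete, the sketch below shows a simple 1-D round-robin assignment of matrix tile columns to devices. It is an illustration only, not the authors' implementation: the paper uses a heterogeneous multi-level block-cyclic method with hybrid (different-sized) tiles and auto-tuned partitioning, whereas the fixed host/GPU shares, device names, and function below are assumptions made for the example.

# Illustrative sketch: a 1-D block-cyclic assignment of matrix tile columns
# to one host (CPU) and several GPUs. The host/GPU shares here are a crude
# stand-in for the paper's auto-tuned hybrid tile sizes.

def assign_tile_columns(num_tile_cols, num_gpus, host_share=1, gpu_share=3):
    """Map each tile column to a device in a repeating (block-cyclic) cycle.

    One cycle consists of `host_share` columns for the host followed by
    `gpu_share` columns for each GPU, so the faster GPUs receive
    proportionally more of the work.
    """
    cycle = ["host"] * host_share
    for g in range(num_gpus):
        cycle += ["gpu%d" % g] * gpu_share
    return [cycle[j % len(cycle)] for j in range(num_tile_cols)]

if __name__ == "__main__":
    # 12 tile columns distributed over a host and 3 GPUs:
    print(assign_tile_columns(12, num_gpus=3))
    # ['host', 'gpu0', 'gpu0', 'gpu0', 'gpu1', 'gpu1', 'gpu1',
    #  'gpu2', 'gpu2', 'gpu2', 'host', 'gpu0']

Because neighboring tile columns land on different devices in a fixed cycle, every device owns work throughout the factorization, which is the load-balancing and communication-minimizing intent the abstract describes.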
March 28, 2012 by hgpu