Efficient Matrix Factorization on Heterogeneous CPU-GPU Systems
AAII, University of Technology Sydney, Australia
arXiv:2006.15980 [cs.DC], (24 Jun 2020)
@misc{yu2020efficient,
  title={Efficient Matrix Factorization on Heterogeneous CPU-GPU Systems},
  author={Yuanhang Yu and Dong Wen and Ying Zhang and Xiaoyang Wang and Wenjie Zhang and Xuemin Lin},
  year={2020},
  eprint={2006.15980},
  archivePrefix={arXiv},
  primaryClass={cs.DC}
}
Matrix Factorization (MF) has been widely applied in machine learning and data mining, and many algorithms have been studied for factorizing matrices. Among them, stochastic gradient descent (SGD) is a commonly used method. Heterogeneous systems combining multi-core CPUs and GPUs have become increasingly attractive due to the prevalence of GPUs in general-purpose data-parallel applications. Given the large computational cost of MF, we aim to improve the efficiency of SGD-based MF computation by utilizing the massive parallel processing power of heterogeneous multiprocessors. The main challenges for parallel SGD algorithms on heterogeneous CPU-GPU systems lie in the granularity of the matrix division and the strategy for assigning tasks. We design a novel strategy that divides the matrix into a set of blocks by considering two aspects. First, we observe that the matrix should be divided nonuniformly: relatively large blocks should be assigned to GPUs to saturate their computing power. Second, beyond exploiting the characteristics of the hardware, the workloads assigned to the two types of hardware should be balanced. To derive the final division strategy, we design a cost model tailored to our problem that accurately estimates the performance of each type of hardware on different data sizes. A dynamic scheduling policy is also used to further balance workloads at runtime. Extensive experiments show that our proposed algorithm achieves high efficiency while maintaining high training quality.
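The paper's cost model and scheduler are not reproduced here, but the two ingredients the abstract names — SGD updates for MF and a nonuniform row split that gives the GPU one large block and the CPU cores smaller ones — can be sketched as follows. This is a minimal sequential illustration; all function names, parameters, and the 60/40 split ratio are our own assumptions, not the authors' method:

```python
import math
import numpy as np

def split_rows(n_rows, gpu_share=0.6, n_cpu_blocks=4):
    """Illustrative nonuniform division: one large block for the GPU,
    the remaining rows in smaller, equally sized blocks for CPU cores.
    (The paper derives its split from a fitted cost model instead.)"""
    gpu_end = int(n_rows * gpu_share)
    blocks = [(0, gpu_end)]  # the large "GPU" block
    step = math.ceil((n_rows - gpu_end) / n_cpu_blocks)
    for start in range(gpu_end, n_rows, step):
        blocks.append((start, min(start + step, n_rows)))
    return blocks

def sgd_mf(ratings, k=8, lr=0.05, reg=0.01, epochs=300, seed=0):
    """Plain (sequential) SGD for R ~= P @ Q.T on observed triples.
    `ratings` is a list of (row, col, value) entries."""
    rng = np.random.default_rng(seed)
    n_rows = 1 + max(r for r, _, _ in ratings)
    n_cols = 1 + max(c for _, c, _ in ratings)
    P = 0.1 * rng.standard_normal((n_rows, k))
    Q = 0.1 * rng.standard_normal((n_cols, k))
    for _ in range(epochs):
        for r, c, v in ratings:
            err = v - P[r] @ Q[c]          # prediction error on one entry
            p_old = P[r].copy()            # use pre-update P[r] for the Q step
            P[r] += lr * (err * Q[c] - reg * P[r])
            Q[c] += lr * (err * p_old - reg * Q[c])
    return P, Q
```

In a block-parallel setting, workers would run the inner SGD loop concurrently on blocks that share no rows or columns; the split above only shows the nonuniform granularity idea, not the scheduling.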
July 5, 2020 by hgpu