CuMF_SGD: Fast and Scalable Matrix Factorization

Xiaolong Xie, Wei Tan, Liana L. Fong, Yun Liang
Center for Energy-efficient Computing and Applications, School of EECS, Peking University, China
arXiv:1610.05838 [cs.LG], (20 Oct 2016)

@article{xie2016cumfsgd,
   title={CuMF\_SGD: Fast and Scalable Matrix Factorization},
   author={Xie, Xiaolong and Tan, Wei and Fong, Liana L. and Liang, Yun},
   year={2016},
   month={oct},
   eprint={1610.05838},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
}

Matrix factorization (MF) has been widely used in, e.g., recommender systems, topic modeling, and word embedding. Stochastic gradient descent (SGD) is popular for solving MF problems because it handles large data sets and easily supports incremental learning. We observed that SGD for MF is memory bound. Meanwhile, single-node CPU systems with caching perform well only on small data sets; distributed systems have higher aggregate memory bandwidth but suffer from relatively slow network connections. This observation inspires us to accelerate MF by exploiting GPUs' high memory bandwidth and fast intra-node connections. We present cuMF_SGD, a CUDA-based SGD solution for large-scale MF problems. On a single GPU, we design two workload scheduling schemes, batch-Hogwild! and wavefront-update, that fully exploit the massive number of cores. In particular, batch-Hogwild!, a vectorized version of Hogwild!, overcomes the issue of memory discontinuity. We also develop highly optimized kernels for the SGD update, leveraging cache, warp-shuffle instructions, and half-precision floats, and we design a partition scheme to utilize multiple GPUs while addressing the well-known convergence issue that arises when parallelizing SGD. On three data sets with only one Maxwell or Pascal GPU, cuMF_SGD runs 3.1X-28.2X as fast as state-of-the-art CPU solutions on 1-64 CPU nodes. Evaluations also show that cuMF_SGD scales well on multiple GPUs with large data sets.
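For readers unfamiliar with the underlying computation: the SGD update that cuMF_SGD parallelizes on the GPU is, per observed rating, a small rank-k update of one user-factor row and one item-factor row. Below is a minimal, single-threaded Python sketch of that update (not the authors' code; all names, learning rate, and regularization values are illustrative):

```python
import numpy as np

def sgd_mf(ratings, n_users, n_items, k=8, lr=0.05, reg=0.02, epochs=200, seed=0):
    """Plain serial SGD for matrix factorization R ~ P @ Q.T.

    ratings: iterable of (user, item, value) triples.
    Returns the learned user factors P (n_users x k) and item factors Q (n_items x k).
    """
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - P[u] @ Q[i]                    # prediction error for this rating
            P[u] += lr * (e * Q[i] - reg * P[u])   # gradient step on the user's factors
            Q[i] += lr * (e * P[u] - reg * Q[i])   # gradient step on the item's factors
    return P, Q

# Toy 2-user, 2-item example
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 1.0), (1, 1, 5.0)]
P, Q = sgd_mf(ratings, n_users=2, n_items=2)
```

Because each update touches only two factor rows, the work is dominated by memory traffic rather than arithmetic, which is the memory-bound behavior the abstract refers to; schemes like Hogwild! run many such updates in parallel without locking.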

* * *

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
