clMF: A fine-grained and portable alternating least squares algorithm for parallel matrix factorization

Jing Chen, Jianbin Fang, Weifeng Liu, Tao Tang, Canqun Yang
College of Computer, National University of Defense Technology, Changsha, China
Future Generation Computer Systems, 2018

@article{chen2018clmf,
  title={clMF: A fine-grained and portable alternating least squares algorithm for parallel matrix factorization},
  author={Chen, Jing and Fang, Jianbin and Liu, Weifeng and Tang, Tao and Yang, Canqun},
  journal={Future Generation Computer Systems},
  year={2018},
  publisher={Elsevier}
}

Alternating least squares (ALS) has proven to be an effective solver for matrix factorization in recommender systems. To speed up factorization, various parallel ALS solvers have been proposed to leverage modern multi-core and many-core processors, but existing implementations are limited in either speed or portability. In this paper, we present clMF, an efficient and portable ALS solver for recommender systems. On one hand, we diagnose the baseline implementation and observe that it is unaware of the hierarchical thread organization of modern hardware. To achieve high performance, we apply a thread batching technique, a fine-grained tiling technique and three architecture-specific optimizations. On the other hand, we implement the ALS solver in OpenCL so that it can run on various platforms (CPUs, GPUs and MICs). Based on the architectural specifics, we select a suitable code variant for each platform to map it efficiently onto the underlying hardware. The experimental results show that our implementation performs 2.8x-15.7x faster on an Intel 16-core CPU, 23.9x-87.9x faster on an NVIDIA K20C GPU and 34.6x-97.1x faster on an AMD Fury X GPU than the baseline implementation. On the K20C GPU, our implementation also outperforms cuMF for latent feature counts ranging from 10 to 100 on various real-world recommendation datasets.
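For readers unfamiliar with the algorithm being parallelized, the following is a minimal sequential sketch of ALS for matrix factorization. It is not the paper's OpenCL implementation; the function name, regularization parameter, and dense-matrix representation (with 0 marking unobserved ratings) are illustrative assumptions. ALS alternates between fixing the item factors and solving a small regularized least-squares system per user, and vice versa:

```python
import numpy as np

def als(R, k=10, lam=0.1, iters=10):
    """Illustrative dense ALS sketch (not the paper's clMF kernel).

    R: (users x items) rating matrix, 0 = unobserved.
    Factorizes R ~ X @ Y.T with latent dimension k.
    """
    m, n = R.shape
    rng = np.random.default_rng(0)
    X = rng.standard_normal((m, k)) * 0.1
    Y = rng.standard_normal((n, k)) * 0.1
    mask = R > 0
    for _ in range(iters):
        # Fix Y: each user's factor is the solution of a k x k
        # regularized normal-equation system over observed items.
        for u in range(m):
            obs = mask[u]
            if not obs.any():
                continue
            Yo = Y[obs]
            A = Yo.T @ Yo + lam * np.eye(k)
            b = Yo.T @ R[u, obs]
            X[u] = np.linalg.solve(A, b)
        # Fix X: symmetric update for each item's factor.
        for i in range(n):
            obs = mask[:, i]
            if not obs.any():
                continue
            Xo = X[obs]
            A = Xo.T @ Xo + lam * np.eye(k)
            b = Xo.T @ R[obs, i]
            Y[i] = np.linalg.solve(A, b)
    return X, Y
```

The per-user and per-item solves are independent of one another, which is the parallelism the paper's thread batching and fine-grained tiling exploit on CPUs, GPUs and MICs.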

