CuLDA_CGS: Solving Large-scale LDA Problems on GPUs

Xiaolong Xie, Yun Liang, Xiuhong Li, Wei Tan
Center for Energy-Efficient Computing and Applications, EECS, Peking University, Beijing, China
arXiv:1803.04631 [cs.DC], (13 Mar 2018)

@article{xie2018culdacgs,
   title={CuLDA_CGS: Solving Large-scale LDA Problems on GPUs},
   author={Xie, Xiaolong and Liang, Yun and Li, Xiuhong and Tan, Wei},
   year={2018},
   month={mar},
   eprint={1803.04631},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}

Latent Dirichlet Allocation (LDA) is a popular topic model. Because the input corpus of an LDA algorithm typically consists of millions to billions of tokens, LDA training is very time-consuming, which can preclude its use in many scenarios, e.g., online services. GPUs have benefited modern machine learning and big data analysis thanks to their high memory bandwidth and compute power. Accordingly, many frameworks, e.g., TensorFlow, Caffe, and CNTK, support GPUs for accelerating popular data-intensive machine learning algorithms. However, we observe that existing LDA solutions on GPUs are not satisfactory. In this paper, we present CuLDA_CGS, an efficient and scalable GPU-based approach for accelerating large-scale LDA problems. CuLDA_CGS is designed to solve LDA problems at high throughput. To this end, we first carefully design the workload partition and synchronization mechanism to exploit the benefits of multiple GPUs. We then offload the LDA sampling process to each individual GPU, optimizing it from the sampling-algorithm, parallelization, and data-compression perspectives. Evaluations show that CuLDA_CGS outperforms state-of-the-art LDA solutions by a large margin (up to 7.3X) on a single GPU, and achieves an additional 3.0X speedup on 4 GPUs. The source code is publicly available at this https URL (CuLDA_CGS).
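The computation being accelerated here is the per-token collapsed Gibbs sampling (CGS) update for LDA. For context, below is a minimal CPU reference sketch of that update in Python; it only illustrates the algorithm, not the authors' GPU implementation, and the function name, toy corpus, and hyperparameters are assumptions.

```python
# Minimal CPU sketch of collapsed Gibbs sampling (CGS) for LDA -- the
# algorithm CuLDA_CGS accelerates on GPUs. Illustrative only.
import numpy as np

def lda_cgs(docs, V, K, iters=200, alpha=0.1, beta=0.01, seed=0):
    rng = np.random.default_rng(seed)
    # z[d][i]: topic assigned to the i-th token of document d
    z = [rng.integers(0, K, size=len(doc)) for doc in docs]
    ndk = np.zeros((len(docs), K))   # document-topic counts
    nkw = np.zeros((K, V))           # topic-word counts
    nk = np.zeros(K)                 # tokens assigned to each topic
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # remove the token's current assignment from the counts
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # CGS conditional: p(k) ∝ (ndk + alpha) * (nkw + beta) / (nk + V*beta)
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return ndk, nkw

# Toy usage: 4 tiny documents over a 6-word vocabulary, 2 topics.
docs = [[0, 1, 0, 2], [1, 0, 1], [3, 4, 5, 4], [5, 3, 5]]
ndk, nkw = lda_cgs(docs, V=6, K=2)
print(nkw)
```

Per the abstract, CuLDA_CGS offloads this sampling loop to GPUs, partitioning the workload across devices and synchronizing the shared counts between them; the specifics of the partitioning, sampling-algorithm optimizations, and data compression are detailed in the paper.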