
Accelerating Topic Model Training on a Single Machine

Mian Lu, Ge Bai, Qiong Luo, Jie Tang, Jiuxin Zhao
A*STAR Institute of High Performance Computing, Singapore
Fifteenth International Asia-Pacific Web Conference (APWeb’13), 2013

@inproceedings{lu2013accelerating,
  title={Accelerating Topic Model Training on a Single Machine},
  author={Lu, Mian and Bai, Ge and Luo, Qiong and Tang, Jie and Zhao, Jiuxin},
  booktitle={Proceedings of the 15th International Asia-Pacific Web Conference (APWeb'13)},
  year={2013}
}

We present the design and implementation of GLDA, a library that uses the GPU (Graphics Processing Unit) to perform Gibbs sampling of Latent Dirichlet Allocation (LDA) on a single machine. LDA is an effective topic model used in many applications, e.g., classification, feature selection, and information retrieval. However, training an LDA model on large data sets takes hours or even days due to the heavy computation and intensive memory access. Therefore, we explore the use of the GPU to accelerate LDA training on a single machine. Specifically, we propose three memory-efficient techniques to handle large data sets on the GPU: (1) generating document-topic counts as needed instead of storing all of them, (2) adopting a compact storage scheme for sparse matrices, and (3) partitioning word tokens. With these techniques, LDA training that would originally require 10 GB of memory can be performed on a commodity GPU card with only 1 GB of GPU memory. Furthermore, our GLDA achieves a speedup of 15X over the original CPU-based LDA on large data sets.
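
The first two techniques from the abstract can be illustrated with a short, hypothetical CUDA sketch. This is not the paper's code: the topic count K, the kernel name, and the toy data are all assumptions. The CsrCounts struct shows one way to hold the sparse word-topic counts compactly (technique 2), and the kernel regenerates each document's topic histogram on the fly from the token-topic assignments instead of keeping a dense document-topic matrix resident in GPU memory (technique 1). Token partitioning (technique 3) is not shown.

// Minimal CUDA sketch of two memory-saving ideas described in the abstract.
// All names and sizes are illustrative assumptions, not the paper's API.
#include <cuda_runtime.h>
#include <stdio.h>

#define K 64  // number of topics (assumed for this sketch)

// Technique (2): CSR-style compact storage for the sparse word-topic count
// matrix. For word w, its nonzero (topic, count) pairs live in
// topics[rowPtr[w] .. rowPtr[w+1]) and counts[rowPtr[w] .. rowPtr[w+1]).
// Shown for illustration only; the kernel below does not use it.
struct CsrCounts {
    const int *rowPtr;  // V+1 row offsets
    const int *topics;  // topic ids of the nonzero entries
    const int *counts;  // corresponding counts
};

// Technique (1): one thread block per document regenerates that document's
// topic counts in shared memory from the token-topic assignments z, instead
// of storing the full D x K document-topic matrix in global memory.
__global__ void docTopicOnTheFly(const int *docPtr,  // D+1 token offsets
                                 const int *z,       // topic of each token
                                 int *docTopicOut)   // D x K, for inspection
{
    __shared__ int nd[K];  // per-document topic counts, rebuilt on demand
    int d = blockIdx.x;

    for (int k = threadIdx.x; k < K; k += blockDim.x)
        nd[k] = 0;
    __syncthreads();

    // Scan this document's tokens and accumulate its topic histogram.
    for (int i = docPtr[d] + threadIdx.x; i < docPtr[d + 1]; i += blockDim.x)
        atomicAdd(&nd[z[i]], 1);
    __syncthreads();

    // A real Gibbs sampler would use nd[] directly for its updates here;
    // this sketch just writes it out so the result can be checked.
    for (int k = threadIdx.x; k < K; k += blockDim.x)
        docTopicOut[d * K + k] = nd[k];
}

int main(void) {
    // Tiny toy corpus: 2 documents, 6 tokens, topic assignments in z.
    int h_docPtr[3] = {0, 4, 6};
    int h_z[6]      = {0, 3, 3, 1, 2, 2};
    int *d_docPtr, *d_z, *d_out;
    cudaMalloc(&d_docPtr, sizeof(h_docPtr));
    cudaMalloc(&d_z, sizeof(h_z));
    cudaMalloc(&d_out, 2 * K * sizeof(int));
    cudaMemcpy(d_docPtr, h_docPtr, sizeof(h_docPtr), cudaMemcpyHostToDevice);
    cudaMemcpy(d_z, h_z, sizeof(h_z), cudaMemcpyHostToDevice);

    docTopicOnTheFly<<<2, 32>>>(d_docPtr, d_z, d_out);

    int h_out[2 * K];
    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
    printf("doc 0, topic 3 count = %d\n", h_out[3]);  // expect 2
    cudaFree(d_docPtr); cudaFree(d_z); cudaFree(d_out);
    return 0;
}

Rebuilding nd[] in shared memory trades a small amount of recomputation for O(K) storage per block instead of an O(D x K) matrix in global memory, which is the kind of memory-for-compute trade-off the abstract describes.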
