Accelerating Collapsed Variational Bayesian Inference for Latent Dirichlet Allocation with Nvidia CUDA Compatible Devices
Nagasaki University, Bunkyo-machi 1-14, Nagasaki, Japan
In Next-Generation Applied Intelligence, Vol. 5579 (2009), pp. 491-500.
@article{masada2009accelerating,
title={Accelerating collapsed variational {B}ayesian inference for latent {D}irichlet allocation with {N}vidia {CUDA} compatible devices},
author={Masada, T. and Hamada, T. and Shibata, Y. and Oguri, K.},
journal={Next-Generation Applied Intelligence},
pages={491--500},
year={2009},
publisher={Springer}
}
In this paper, we propose an acceleration of collapsed variational Bayesian (CVB) inference for latent Dirichlet allocation (LDA) using Nvidia CUDA compatible devices. While LDA is an efficient Bayesian multi-topic document model, its parameter estimation requires more complicated computations than those of simpler document models such as probabilistic latent semantic indexing. We therefore accelerate CVB inference, an efficient deterministic inference method for LDA, with Nvidia CUDA. In our evaluation experiments, we used a set of 50,000 documents and a set of 10,000 images, and obtained inference results comparable to those of sequential CVB inference.
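To make the per-token update being accelerated concrete, here is a minimal sketch (our own code, not the authors' CUDA implementation) of one sequential sweep of the zeroth-order collapsed variational update (CVB0) for LDA on a toy corpus. The paper's CVB inference additionally carries second-order variance corrections, which are omitted here; all variable names and the toy data are illustrative assumptions. The CUDA version described in the paper parallelizes updates of this kind across device threads.

```python
# Illustrative sketch only: zeroth-order collapsed variational update (CVB0)
# for LDA, run sequentially on a toy corpus. The paper's CVB inference also
# uses second-order corrections, omitted here for brevity.
import random


def cvb0_sweep(docs, K, V, gamma, alpha=0.1, beta=0.01):
    """One sweep over all tokens.

    docs:  list of documents, each a list of word ids in [0, V).
    gamma: gamma[d][i][k] is the responsibility of topic k for token i
           of document d (each row sums to 1). Updated in place.
    """
    # Expected counts from the current responsibilities.
    n_dk = [[0.0] * K for _ in docs]        # document-topic counts
    n_kw = [[0.0] * V for _ in range(K)]    # topic-word counts
    n_k = [0.0] * K                         # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            for k in range(K):
                g = gamma[d][i][k]
                n_dk[d][k] += g
                n_kw[k][w] += g
                n_k[k] += g
    # Update each token, excluding its own contribution from the counts.
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            old = gamma[d][i]
            new = []
            for k in range(K):
                g = old[k]
                new.append((n_dk[d][k] - g + alpha)
                           * (n_kw[k][w] - g + beta)
                           / (n_k[k] - g + V * beta))
            s = sum(new)
            new = [x / s for x in new]
            # Fold the revised responsibilities back into the counts.
            for k in range(K):
                delta = new[k] - old[k]
                n_dk[d][k] += delta
                n_kw[k][w] += delta
                n_k[k] += delta
            gamma[d][i] = new
    return gamma


# Toy usage: 2 documents, vocabulary of 4 words, 2 topics.
random.seed(0)
docs = [[0, 1, 1, 2], [2, 3, 3, 0]]
K, V = 2, 4
gamma = [[(lambda r: [r, 1.0 - r])(random.random()) for _ in doc]
         for doc in docs]
for _ in range(20):
    gamma = cvb0_sweep(docs, K, V, gamma)
```

In a CUDA setting, the natural parallelization (as the paper's title suggests) is to assign disjoint groups of tokens or documents to threads so that the count updates can proceed concurrently.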
November 17, 2010 by hgpu