
Parallel latent semantic analysis using a graphics processing unit

Joseph M. Cavanagh, Thomas E. Potok, Xiaohui Cui
Division of Science and Mathematics, University of Minnesota – Morris, Morris, Minnesota 56267
In GECCO ’09: Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference (2009), pp. 2505-2510.

@conference{cavanagh2009parallel,
   title={Parallel latent semantic analysis using a graphics processing unit},
   author={Cavanagh, J.M. and Potok, T.E. and Cui, X.},
   booktitle={Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers},
   pages={2505--2510},
   year={2009},
   organization={ACM}
}

Latent Semantic Analysis (LSA) can be used to reduce the dimensions of large term-document datasets using Singular Value Decomposition (SVD). However, with the ever-expanding size of datasets, current implementations are not fast enough to compute the results quickly and easily on a standard PC. The Graphics Processing Unit (GPU) can solve some highly parallel problems much faster than the traditional sequential processor (CPU). Thus, a deployable system that uses a GPU to speed up large-scale LSA processing is a much more cost-effective choice than a computer cluster. In this paper, we present a parallel LSA implementation on the GPU, using NVIDIA's Compute Unified Device Architecture (CUDA) and Compute Unified Basic Linear Algebra Subprograms (CUBLAS). The performance of this implementation is compared against a traditional LSA implementation on the CPU using an optimized Basic Linear Algebra Subprograms (BLAS) library. For large matrices with dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version.
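The computational core of GPU-accelerated LSA is computing the SVD of the term-document matrix on the device, then keeping only the top-k singular values and vectors as the reduced semantic space. The sketch below illustrates that step, but it is not the authors' code: the paper builds on CUDA and CUBLAS directly, whereas this example uses the modern cuSOLVER dense SVD routine (cusolverDnSgesvd), which did not exist in 2009, and the matrix sizes are arbitrary placeholders.

    // Minimal sketch: SVD of a term-document matrix on the GPU via cuSOLVER.
    // Assumption: this stands in for the paper's hand-rolled CUDA/CUBLAS SVD.
    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>
    #include <cusolverDn.h>

    int main(void) {
        const int m = 1024;  // terms (rows); cusolverDnSgesvd requires m >= n
        const int n = 256;   // documents (columns)

        // Fill a column-major term-document matrix with dummy term counts.
        float *hA = (float *)malloc(sizeof(float) * m * n);
        for (int i = 0; i < m * n; ++i) hA[i] = (float)(rand() % 5);

        float *dA, *dS, *dU, *dVT;
        cudaMalloc(&dA,  sizeof(float) * m * n);
        cudaMalloc(&dS,  sizeof(float) * n);      // singular values
        cudaMalloc(&dU,  sizeof(float) * m * m);  // left singular vectors
        cudaMalloc(&dVT, sizeof(float) * n * n);  // right singular vectors (transposed)
        cudaMemcpy(dA, hA, sizeof(float) * m * n, cudaMemcpyHostToDevice);

        cusolverDnHandle_t handle;
        cusolverDnCreate(&handle);

        int lwork = 0;
        cusolverDnSgesvd_bufferSize(handle, m, n, &lwork);
        float *dWork;
        int *dInfo;
        cudaMalloc(&dWork, sizeof(float) * lwork);
        cudaMalloc(&dInfo, sizeof(int));

        // 'A' = compute all singular vectors; a rank-k LSA projection then
        // keeps the first k columns of U and the k largest singular values.
        cusolverDnSgesvd(handle, 'A', 'A', m, n, dA, m, dS, dU, m, dVT, n,
                         dWork, lwork, NULL, dInfo);
        cudaDeviceSynchronize();

        float s0;
        cudaMemcpy(&s0, dS, sizeof(float), cudaMemcpyDeviceToHost);
        printf("largest singular value: %f\n", s0);

        cudaFree(dA); cudaFree(dS); cudaFree(dU); cudaFree(dVT);
        cudaFree(dWork); cudaFree(dInfo);
        cusolverDnDestroy(handle);
        free(hA);
        return 0;
    }

The reported sensitivity to dimensions divisible by 16 is consistent with CUBLAS-era kernels being tiled for half-warp-aligned matrix sizes; padding the term-document matrix to a multiple of 16 is the usual workaround.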
