Full Covariance Gaussian Mixture Models Evaluation on GPU

Jan Vanek, Jan Trmal, Josef V. Psutka, Josef Psutka
Department of Cybernetics, University of West Bohemia, Univerzitni 8, 306 14 Plzen, Czech Republic
IEEE International Symposium on Signal Processing and Information Technology, 2012


   author={Van\v{e}k, J. and Trmal, J. and Psutka, J. V. and Psutka, J.},
   title={Full Covariance Gaussian Mixture Models Evaluation on GPU},
   journal={IEEE International Symposium on Signal Processing and Information Technology},
   address={Ho Chi Minh City, Vietnam},
   year={2012}







Gaussian mixture models (GMMs) are often used in various data processing and classification tasks to model a continuous probability density in a multi-dimensional space. In cases where the dimension of the feature space is relatively high (e.g. in automatic speech recognition (ASR)), a GMM with a higher number of Gaussians with diagonal covariances (DC) is used instead of full covariances (FC), for two reasons. The first reason is the problem of estimating robust FC matrices from a limited training data set. The second reason is the much higher computational cost of evaluating an FC-GMM. The first reason has been addressed in many recent publications. In contrast, this paper addresses the second reason by describing an efficient implementation of FC-GMM evaluation on a Graphics Processing Unit (GPU). The performance was tested on acoustic models for ASR, and it is shown that even a low-end laptop GPU is capable of evaluating a large acoustic model in a fraction of the real speech time. Three variants of the algorithm were implemented and compared on various GPUs: NVIDIA CUDA, NVIDIA OpenCL, and ATI/AMD OpenCL.
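The per-component cost the abstract refers to comes from the quadratic form x^T Sigma^{-1} x in the FC Gaussian log-density, which is what the GPU kernels parallelize. As a minimal pure-Python sketch (not the paper's implementation; all names are illustrative, and the dimension is fixed at 2 to keep the matrix inverse explicit), the FC-GMM log-likelihood of one feature vector can be computed as:

```python
import math

def fc_gauss_logpdf(x, mean, cov):
    """Log-density of a 2-D full-covariance Gaussian (illustrative helper)."""
    d = 2
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    # Explicit 2x2 inverse; larger dimensions would use a Cholesky factor.
    inv = [[ cov[1][1] / det, -cov[0][1] / det],
           [-cov[1][0] / det,  cov[0][0] / det]]
    dx = [x[0] - mean[0], x[1] - mean[1]]
    # Quadratic form dx^T Sigma^{-1} dx -- the dominant cost for FC models.
    q = sum(dx[i] * inv[i][j] * dx[j] for i in range(d) for j in range(d))
    return -0.5 * (d * math.log(2 * math.pi) + math.log(det) + q)

def fc_gmm_loglike(x, weights, means, covs):
    """Mixture log-likelihood via log-sum-exp over the components."""
    logs = [math.log(w) + fc_gauss_logpdf(x, m, c)
            for w, m, c in zip(weights, means, covs)]
    top = max(logs)
    return top + math.log(sum(math.exp(l - top) for l in logs))
```

On a GPU, the inverse covariances would typically be precomputed once per model, so each (frame, component) pair reduces to the quadratic form plus a log-sum-exp reduction, which maps naturally onto thread blocks.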
