Fast Universal Background Model (UBM) Training on GPUs using Compute Unified Device Architecture (CUDA)

M. Azhari, C. Ergun
Computer Engineering Department, Eastern Mediterranean University
International Journal of Electrical & Computer Sciences (IJECS), Vol: 11, Issue: 04, 2011

@article{azhari2011fast,
  title={Fast Universal Background Model (UBM) Training on GPUs using Compute Unified Device Architecture (CUDA)},
  author={Azhari, M. and Ergun, C.},
  journal={International Journal of Electrical \& Computer Sciences (IJECS)},
  volume={11},
  number={4},
  year={2011}
}

Universal Background Modeling (UBM) is an alternative-hypothesis modeling technique used extensively in Speaker Verification (SV) systems. Training background models from large amounts of speech data requires significant memory and computational load. This paper presents a parallel implementation of a speaker verification system based on Gaussian Mixture Modeling – Universal Background Modeling (GMM-UBM), designed for the many-core architecture of NVIDIA Graphics Processing Units (GPUs) using the CUDA single instruction, multiple threads (SIMT) model. The CUDA implementation of these algorithms is designed so that computation speed scales with the number of GPU cores. In this experiment, a 30x speedup for k-means clustering and a 16x speedup for Expectation Maximization (EM) were achieved for an input of about 350K frames with 16 dimensions and 1024 mixtures on a GeForce GTX 570 (NVIDIA Fermi series) with 480 cores, compared to a single-threaded implementation on a traditional CPU.
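The speedups reported above come from the fact that the dominant cost in GMM-UBM training — evaluating all mixture components against every frame in the E-step — is independent per frame, so each frame can be handled by one GPU thread in a SIMT layout. As a hedged illustration only (not the authors' implementation), the sketch below shows that per-frame computation for a diagonal-covariance GMM in NumPy; the inner loop over frames is exactly the work a CUDA kernel would distribute across threads. All function and variable names here are the sketch's own assumptions.

```python
import numpy as np

def e_step(X, weights, means, variances):
    """Responsibilities gamma[n, k] of mixture k for frame n in a
    diagonal-covariance GMM. Every frame n is computed independently,
    which is what makes this step map naturally to one GPU thread
    per frame in a SIMT implementation (illustrative sketch only).
    X: (N, D) frames; weights: (K,); means, variances: (K, D)."""
    D = X.shape[1]
    # log |Sigma_k| for diagonal covariances
    log_det = np.sum(np.log(variances), axis=1)             # (K,)
    # squared Mahalanobis distance of each frame to each mean
    diff = X[:, None, :] - means[None, :, :]                # (N, K, D)
    maha = np.sum(diff**2 / variances[None, :, :], axis=2)  # (N, K)
    # log w_k * N(x_n | mu_k, Sigma_k)
    log_gauss = -0.5 * (D * np.log(2.0 * np.pi) + log_det[None, :] + maha)
    log_joint = np.log(weights)[None, :] + log_gauss
    # normalize per frame with log-sum-exp for numerical stability
    log_norm = np.logaddexp.reduce(log_joint, axis=1, keepdims=True)
    return np.exp(log_joint - log_norm)
```

With 350K frames, 16 dimensions, and 1024 mixtures as in the paper's experiment, each of the 350K rows of this computation is independent, which is why the algorithm's speed can grow with the number of GPU cores.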

* * *

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors

Contact us: