Decoding with Finite-State Transducers on GPUs

Arturo Argueta, David Chiang
Department of Computer Science and Engineering, University of Notre Dame
arXiv:1701.03038 [cs.CL], 11 Jan 2017

@article{argueta2017decoding,
   title={Decoding with Finite-State Transducers on GPUs},
   author={Argueta, Arturo and Chiang, David},
   year={2017},
   month={jan},
   eprint={1701.03038},
   archivePrefix={arXiv},
   primaryClass={cs.CL}
}

Weighted finite automata and transducers (including hidden Markov models and conditional random fields) are widely used in natural language processing (NLP) to perform tasks such as morphological analysis, part-of-speech tagging, chunking, named entity recognition, speech recognition, and others. Parallelizing finite-state algorithms on graphics processing units (GPUs) would benefit many areas of NLP. Although researchers have implemented GPU versions of basic graph algorithms, limited previous work, to our knowledge, has been done on GPU algorithms for weighted finite automata. We introduce a GPU implementation of the Viterbi and forward-backward algorithms, achieving decoding speedups of up to 5.2x over our serial implementation running on different computer architectures and 6093x over OpenFST.
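For reference, the Viterbi decoding the abstract describes can be sketched in a few lines of dense matrix code. This is an illustrative sketch only, not the paper's GPU implementation: all function names and parameters below are hypothetical, and the computation is expressed with NumPy array operations to highlight the per-time-step max-times products that a GPU version would parallelize.

```python
import numpy as np

def viterbi(init, trans, emit, observations):
    """Most likely HMM state sequence (hypothetical illustrative sketch).
    init:  (S,)   initial state probabilities
    trans: (S, S) transition probabilities, trans[i, j] = P(j | i)
    emit:  (S, V) emission probabilities, emit[s, o] = P(o | s)
    observations: sequence of observation indices
    """
    S = len(init)
    T = len(observations)
    delta = np.zeros((T, S))             # best score of any path ending in each state
    back = np.zeros((T, S), dtype=int)   # backpointers for path recovery
    delta[0] = init * emit[:, observations[0]]
    for t in range(1, T):
        # scores[i, j]: best score of reaching state j at time t via state i
        scores = delta[t - 1][:, None] * trans
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * emit[:, observations[t]]
    # Follow backpointers from the best final state.
    path = [int(delta[T - 1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

The inner step is a matrix-style operation in the max-times semiring (replace `max`/`*` with `+`/`*` and it becomes the forward algorithm), which is why this family of algorithms maps naturally onto GPU parallelism.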

* * *

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
