Parallel Algorithms for GPU accelerated Probabilistic Inference
Department of Computer Science, Artificial Intelligence Group, TU Dortmund University, Dortmund, 44227
Workshop on Big Learning (NIPS2011), 2011
@inproceedings{piatkowski2011parallel,
  title={Parallel Algorithms for GPU accelerated Probabilistic Inference},
  author={Piatkowski, N.},
  booktitle={Workshop on Big Learning (NIPS 2011)},
  year={2011}
}
Real-world data is likely to contain an inherent structure. Such structures may be represented with graphs that encode independence assumptions within the data. Performing inference in these models is nearly intractable on mobile devices or ordinary workstations. This work introduces and compares two approaches for accelerating inference in graphical models by using GPUs as parallel processing units. It is empirically shown that, in order to achieve a scalable parallel algorithm, the workload has to be distributed equally among all processing units of a GPU. We accomplish this by introducing Thread-Cooperative message computations.
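To illustrate the idea, the following is a minimal CUDA sketch of a thread-cooperative message update for sum-product belief propagation on a pairwise model. It is not the paper's implementation; the kernel and parameter names (compute_messages, node_pot, edge_pot, msg_in, edge_src, K) are illustrative assumptions.

// Hedged sketch: one possible "thread-cooperative" message computation.
// One thread block computes one directed message mu_{i->j}; the threads of
// the block cooperate by each handling a subset of the target states x_j,
// so the work of a single message is spread over many processing units.
#include <cuda_runtime.h>

__global__ void compute_messages(const float* node_pot,  // [num_nodes * K] unary potentials
                                 const float* edge_pot,  // [num_edges * K * K] pairwise potentials
                                 const float* msg_in,    // [num_edges * K] product of other incoming messages at the source node (precomputed)
                                 float*       msg_out,   // [num_edges * K] messages being computed
                                 const int*   edge_src,  // [num_edges] source node i of each directed edge (i -> j)
                                 int K)                  // number of states per variable
{
    int e = blockIdx.x;        // one block per directed edge
    int i = edge_src[e];

    // Threads take target states x_j in a strided fashion.
    for (int xj = threadIdx.x; xj < K; xj += blockDim.x) {
        float sum = 0.0f;
        for (int xi = 0; xi < K; ++xi) {
            // unary potential * pairwise potential * incoming message product
            sum += node_pot[i * K + xi]
                 * edge_pot[(e * K + xi) * K + xj]
                 * msg_in[e * K + xi];
        }
        msg_out[e * K + xj] = sum;
    }
}

Launching one block per directed edge, e.g. compute_messages<<<num_edges, 128>>>(...), assigns every message the same number of threads regardless of node degree, which is one plausible way to distribute the workload equally across the GPU in the sense the abstract describes.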