Exploring Multiple Dimensions of Parallelism in Junction Tree Message Passing
Electrical and Computer Engineering, Carnegie Mellon University
UAI Application Workshops: Big Data meet Complex Models and Models for Spatial, Temporal and Network Data, co-located with Uncertainty in Artificial Intelligence (UAI 2013), 2013
@inproceedings{zheng2013exploring,
title={Exploring Multiple Dimensions of Parallelism in Junction Tree Message Passing},
author={Zheng, Lu and Mengshoel, Ole J},
booktitle={UAI Application Workshops: Big Data meet Complex Models and Models for Spatial, Temporal and Network Data},
pages={87},
year={2013}
}
Belief propagation over junction trees is known to be computationally challenging in the general case. One way of addressing this computational challenge is to use node-level parallel computing, and parallelize the computation associated with each separator potential table cell. However, this approach is not efficient for junction trees that mainly contain small separators. In this paper, we analyze this problem, and address it by studying a new dimension of node-level parallelism, namely arithmetic parallelism. In addition, on the graph level, we use a clique merging technique to further adapt junction trees to parallel computing platforms. We apply our parallel approach to both marginal and most probable explanation (MPE) inference in junction trees. In experiments with a Graphics Processing Unit (GPU), we obtain for marginal inference an average speedup of 5.54x and a maximum speedup of 11.94x; speedups for MPE inference are similar.
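The node-level parallelism described in the abstract can be sketched briefly. This is not the authors' GPU implementation, only a minimal NumPy illustration under assumed binary variables and a hypothetical three-variable clique: computing a separator message marginalizes a clique potential onto the separator's variables, and each separator table cell is an independent reduction, which is the unit of parallel work; MPE inference has the same structure with max in place of sum.

```python
import numpy as np

# Hypothetical clique potential phi(A, B, C) over three binary variables.
rng = np.random.default_rng(0)
phi = rng.random((2, 2, 2))

# Separator over {A}: each of its 2 cells is an independent reduction
# over the clique variables not in the separator (B and C). These
# per-cell reductions are what node-level parallelism distributes.
msg_marginal = phi.sum(axis=(1, 2))  # marginal inference: sum out B, C
msg_mpe = phi.max(axis=(1, 2))       # MPE inference: max out B, C

print(msg_marginal.shape, msg_mpe.shape)
```

When separators are small (few cells), this cell-level decomposition alone exposes too little work to keep a GPU busy, which motivates the paper's additional arithmetic parallelism within each reduction and clique merging at the graph level.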
September 13, 2013 by hgpu