
Exploring Multiple Dimensions of Parallelism in Junction Tree Message Passing

Lu Zheng, Ole J. Mengshoel
Electrical and Computer Engineering, Carnegie Mellon University
UAI Application Workshops: Big Data meet Complex Models and Models for Spatial, Temporal and Network Data, co-located with Uncertainty in Artificial Intelligence (UAI 2013), 2013

@inproceedings{zheng2013exploring,
   title={Exploring Multiple Dimensions of Parallelism in Junction Tree Message Passing},
   author={Zheng, Lu and Mengshoel, Ole J.},
   booktitle={UAI Application Workshops: Big Data meet Complex Models and Models for Spatial, Temporal and Network Data},
   pages={87},
   year={2013}
}


Belief propagation over junction trees is known to be computationally challenging in the general case. One way of addressing this computational challenge is to use node-level parallel computing and parallelize the computation associated with each separator potential table cell. However, this approach is not efficient for junction trees that mainly contain small separators. In this paper, we analyze this problem and address it by studying a new dimension of node-level parallelism, namely arithmetic parallelism. In addition, on the graph level, we use a clique merging technique to further adapt junction trees to parallel computing platforms. We apply our parallel approach to both marginal and most probable explanation (MPE) inference in junction trees. In experiments with a Graphics Processing Unit (GPU), we obtain an average speedup of 5.54x and a maximum speedup of 11.94x for marginal inference; speedups for MPE inference are similar.
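
Below is a minimal CUDA sketch, not the authors' code, of the baseline element-wise parallelism the abstract refers to: one GPU thread per separator potential table cell, each thread summing the clique potential entries that map to its cell. The flat cliquePot[s * ratio + k] mapping, the kernel name, and the toy sizes are illustrative assumptions; real junction tree index mappings are more involved. With a small separator, this launch leaves most of the GPU idle, which is the inefficiency that the paper's arithmetic parallelism (parallelizing the per-cell summation itself) and clique merging are meant to address.

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// One thread per separator potential table cell: each thread marginalizes
// (sums) the block of clique potential entries mapped to its cell.
// The mapping cliquePot[s * ratio + k] is a simplifying assumption made
// here for illustration only.
__global__ void separatorMessage(const float *cliquePot, float *sepPot,
                                 int numSepCells, int ratio)
{
    int s = blockIdx.x * blockDim.x + threadIdx.x;
    if (s >= numSepCells) return;

    float sum = 0.0f;
    for (int k = 0; k < ratio; ++k)
        sum += cliquePot[s * ratio + k];
    sepPot[s] = sum;
}

int main()
{
    const int numSepCells = 8;   // a deliberately small separator table
    const int ratio = 4;         // clique entries per separator cell
    std::vector<float> h_clique(numSepCells * ratio, 1.0f);

    float *d_clique = nullptr, *d_sep = nullptr;
    cudaMalloc(&d_clique, h_clique.size() * sizeof(float));
    cudaMalloc(&d_sep, numSepCells * sizeof(float));
    cudaMemcpy(d_clique, h_clique.data(), h_clique.size() * sizeof(float),
               cudaMemcpyHostToDevice);

    // With only 8 separator cells, a single small block is launched and most
    // of the GPU sits idle: the case that motivates arithmetic parallelism.
    separatorMessage<<<1, 32>>>(d_clique, d_sep, numSepCells, ratio);

    std::vector<float> h_sep(numSepCells);
    cudaMemcpy(h_sep.data(), d_sep, numSepCells * sizeof(float),
               cudaMemcpyDeviceToHost);
    printf("sepPot[0] = %.1f (expected 4.0 for all-ones input)\n", h_sep[0]);

    cudaFree(d_clique);
    cudaFree(d_sep);
    return 0;
}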