
SIMD Parallel Gibbs Sampling of Probabilistic Directed Acyclic Graphs

Alireza S. Mahani, Mansour T.A. Sharabiani
Sentrana Inc., Washington DC, USA
arXiv:1310.1537 [stat.CO] (6 Oct 2013)
@ARTICLE{2013arXiv1310.1537M,
   author = {{Mahani}, A.~S. and {Sharabiani}, M.~T.~A.},
   title = "{SIMD Parallel Gibbs Sampling of Probabilistic Directed Acyclic Graphs}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1310.1537},
   primaryClass = "stat.CO",
   keywords = {Statistics - Computation, Computer Science - Artificial Intelligence, Computer Science - Distributed, Parallel, and Cluster Computing},
   year = 2013,
   month = oct,
   adsurl = {http://adsabs.harvard.edu/abs/2013arXiv1310.1537M},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


We present a single-chain parallelization strategy for Gibbs sampling of probabilistic Directed Acyclic Graphs, where contributions from child nodes to the conditional posterior distribution of a given node are calculated concurrently. For statistical models with many independent observations, such parallelism takes a Single-Instruction-Multiple-Data (SIMD) form and can be efficiently implemented using multicore parallelization and vector instructions on x86 processors. Since all tasks have near-identical durations in SIMD parallelism, multicore parallelization benefits from static scheduling, which minimizes thread synchronization overhead. For multi-socket servers, a compact processor affinity minimizes cross-chip communication during the reduction phase, leading to better scaling of performance with the number of cores. Effective vectorization requires coherent memory access patterns, e.g., converting an array of node structures into a structure of arrays. When calculating each child node's contribution involves a loop, e.g., to calculate the inner product of the covariate and coefficient vectors, manual unrolling of this inner loop is necessary to facilitate vectorization of the outer loop. After these optimizations, we achieve a nearly 10x speedup using only 4 cores of an Intel x86-64 processor with Advanced Vector Extensions, even for datasets of modest size. SIMD parallel Gibbs sampling can be combined with parallel sampling of conditionally-independent nodes for nested parallel Gibbs sampling of Hierarchical Bayesian models. Our optimization techniques improve the scaling of performance with the number of cores and the width of vector units, paving the way for further speedups on highly parallel, SIMD-oriented coprocessors such as the Intel Xeon Phi and Graphics Processing Units.
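The single-socket optimizations named in the abstract (static scheduling, a structure-of-arrays layout, and manual unrolling of the inner product) can be illustrated with a short C/OpenMP sketch. This is not the authors' code: the logistic likelihood, the fixed covariate count K, and the function name are assumptions made for illustration.

#include <math.h>

#define K 4   /* hypothetical: number of covariates, fixed at compile time */

/* Structure-of-arrays layout: X[j] holds the j-th covariate for all N
   observations, so the loop over i below touches contiguous memory. */
double loglik(int N, const double *const X[K], const double *y,
              const double beta[K])
{
    double sum = 0.0;
    /* All iterations cost the same, so a static schedule avoids the
       overhead of dynamic work distribution. */
    #pragma omp parallel for schedule(static) reduction(+:sum)
    for (int i = 0; i < N; i++) {
        /* Inner product over the K covariates, unrolled by hand so that
           the compiler can vectorize the loop over observations i. */
        double xb = beta[0]*X[0][i] + beta[1]*X[1][i]
                  + beta[2]*X[2][i] + beta[3]*X[3][i];
        /* Example likelihood term: Bernoulli observations with a logit
           link; -log1p(exp(xb)) is log(1/(1 + exp(xb))). */
        sum += y[i]*xb - log1p(exp(xb));
    }
    return sum;
}

Compiled with, e.g., gcc -O3 -mavx -fopenmp (and linked with -lm), the loop over observations vectorizes with AVX. On a multi-socket server, a compact affinity can be requested through the standard OpenMP 4.0 environment variables, e.g. OMP_PLACES=cores OMP_PROC_BIND=close, which fills one chip with threads before spilling to the next and thus localizes the reduction.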


Featured events

* * *

Free GPU computing nodes at hgpu.org

Registered users can now run their OpenCL applications at hgpu.org. We provide 1 minute of compute time per run on two nodes equipped with AMD and NVIDIA graphics processing units (specifications below). There are no restrictions on the number of runs.

The platforms are:

Node 1
  • GPU device 0: AMD/ATI Radeon HD 5870 2GB, 850MHz
  • GPU device 1: AMD/ATI Radeon HD 6970 2GB, 880MHz
  • CPU: AMD Phenom II X6 1055T @ 2.8GHz
  • RAM: 12GB
  • OS: OpenSUSE 13.1
  • SDK: AMD APP SDK 2.9
Node 2
  • GPU device 0: AMD/ATI Radeon HD 7970 3GB, 1000MHz
  • GPU device 1: nVidia GeForce GTX 560 Ti 2GB, 822MHz
  • CPU: Intel Core i7-2600 @ 3.4GHz
  • RAM: 16GB
  • OS: OpenSUSE 12.2
  • SDK: nVidia CUDA Toolkit 6.0.1, AMD APP SDK 2.9

Completed OpenCL projects should be uploaded via the User dashboard (see instructions and an example there); compilation and execution terminal output logs will be provided to the user.

The information sent to hgpu.org will be treated according to our Privacy Policy.
