Some of the What?, Why?, How?, Who? and Where? of Graphics Processing Unit Computing for Bayesian Analysis

Marc A. Suchard, Chris Holmes, Mike West
Departments of Biomathematics, Human Genetics and Biostatistics, University of California, Los Angeles, CA 90095
ISBA Bulletin (March 2010, volume 17, issue 1)


@article{suchard2010gpu,
   title={Some of the What?, Why?, How?, Who? and Where? of Graphics Processing Unit Computing for Bayesian Analysis},
   author={Suchard, M. and Holmes, C. and West, M.},
   journal={ISBA Bulletin},
   volume={17},
   number={1},
   year={2010},
   month={March}
}







Over the last 20 years or so, a number of Bayesian researchers and groups have invested a good deal of time, effort and money in parallel computing for Bayesian analysis. The growth from "small research group" to "institutionally supported" cluster computational facilities has had a substantial impact on a number of areas of Bayesian analysis, enabling analyses that are otherwise practically infeasible. Parallel computing has also motivated new approaches to simulation and optimisation-based Bayesian computations that aim to maximally exploit the "master-slave" and "embarrassingly parallel" computational model [e.g., 3, 4, 6]. In more recent years, increasingly prevalent multi-core CPUs in standard servers, desktops and laptops have engendered interest in relatively straightforward multi-threading of existing Bayesian analysis code, whether implemented in low-level languages (C/C++) or through parallelisation facilities in environments such as R and Matlab. Much progress has resulted, both in research and in advancing the use of Bayesian methods in increasingly computationally challenging problems.

As we look ahead, however, the potential impact of parallel computation on both immediate research and the development of more broadly useful software is clearly – to some of us – dramatically enhanced by the advent of scientific computation using desktop and laptop graphics processing units (GPUs). Major new opportunities for orders-of-magnitude speed-ups in computation are emerging through GPU programming, and the technology is cheap to purchase and to run, and readily available.

