Fully Parallel Particle Learning for GPGPUs and Other Parallel Devices

Kenichiro McAlinn, Hiroaki Katsura, Teruo Nakatsuma
Graduate School of Economics, Keio University, 2-15-45 Mita, Minato-ku, Tokyo, Japan
arXiv:1212.1639 [stat.CO] (7 Dec 2012)


@ARTICLE{2012arXiv1212.1639M,
   author = {{McAlinn}, K. and {Katsura}, H. and {Nakatsuma}, T.},
   title = "{Fully Parallel Particle Learning for GPGPUs and Other Parallel Devices}",
   journal = {ArXiv e-prints},
   eprint = {1212.1639},
   primaryClass = "stat.CO",
   year = 2012,
   month = dec,
   keywords = {Statistics - Computation},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}





We develop a novel parallel algorithm for particle filtering (and particle learning) designed specifically for GPUs (graphics processing units) and similar parallel computing devices. In our algorithm, a full cycle of particle filtering (computing the likelihood of each particle, constructing the cumulative distribution function (CDF) for resampling, resampling the particles with the CDF, and propagating new particles for the next cycle) is executed in a massively parallel manner. A key advantage of our algorithm is that every numerical computation and memory access related to the particle filtering is executed solely inside the GPU; no data transfer between the GPU's device memory and the CPU's host memory occurs unless the generated particles must be moved to the host memory for further processing. The algorithm thus circumvents the limited memory bandwidth between the GPU and the CPU. To demonstrate the advantage of the parallel algorithm, we conducted a Monte Carlo experiment in which we estimated a simple state space model via particle learning, using both the parallel algorithm and conventional sequential algorithms, and compared them in terms of execution time. The results showed that the parallel algorithm was far superior to the sequential algorithms.
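The four steps of the filtering cycle named in the abstract can be sketched in vectorized NumPy, where each array operation corresponds to a data-parallel GPU kernel (the cumulative sum, for instance, would map to a parallel prefix scan on the device). The local-level state space model, the parameter values, and the use of systematic resampling here are illustrative assumptions, not the authors' benchmark setup:

```python
import numpy as np

def particle_filter_step(particles, y, rng, sigma_x=1.0, sigma_y=1.0):
    """One full cycle of a bootstrap particle filter for a local-level
    model x_t = x_{t-1} + w_t, y_t = x_t + v_t (illustrative example)."""
    n = particles.size
    # 1. Likelihood of the observation y under each particle (data-parallel).
    logw = -0.5 * ((y - particles) / sigma_y) ** 2
    w = np.exp(logw - logw.max())          # stabilize before exponentiating
    # 2. Construct the CDF of the weights (prefix scan on a GPU).
    cdf = np.cumsum(w)
    cdf /= cdf[-1]
    # 3. Resample with the CDF: systematic resampling, one independent
    #    binary search per particle.
    u = (rng.random() + np.arange(n)) / n
    resampled = particles[np.searchsorted(cdf, u)]
    # 4. Propagate new particles through the state equation for the next cycle.
    return resampled + rng.normal(0.0, sigma_x, size=n)

# Example usage: filter a short stream of observations.
rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, size=10_000)
for y_t in [4.8, 5.1, 5.0, 4.9, 5.2]:
    particles = particle_filter_step(particles, y_t, rng)
```

In the authors' GPU version, all four arrays would live in device memory for the whole run, with only the final particles (if needed) copied back to the host.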
