Efficient Random Sampling – Parallel, Vectorized, Cache-Efficient, and Online

Peter Sanders, Sebastian Lamm, Lorenz Hübschle-Schneider, Emanuel Schrade, Carsten Dachsbacher
Karlsruhe Institute of Technology, Karlsruhe, Germany
arXiv:1610.05141 [cs.DS], (17 Oct 2016)

@article{sanders2016efficient,
   title={Efficient Random Sampling – Parallel, Vectorized, Cache-Efficient, and Online},
   author={Sanders, Peter and Lamm, Sebastian and H{\"u}bschle-Schneider, Lorenz and Schrade, Emanuel and Dachsbacher, Carsten},
   year={2016},
   month={oct},
   eprint={1610.05141},
   archivePrefix={arXiv},
   primaryClass={cs.DS}
}

We consider the problem of sampling $n$ numbers from the range $\{1,\ldots,N\}$ without replacement on modern architectures. The main result is a simple divide-and-conquer scheme that makes sequential algorithms more cache efficient and leads to a parallel algorithm running in expected time $\mathcal{O}\left(n/p+\log p\right)$ on $p$ processors. The amount of communication between the processors is very small and independent of the sample size. We also discuss modifications needed for load balancing, reservoir sampling, online sampling, sampling with replacement, Bernoulli sampling, and vectorization on SIMD units or GPUs.
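The core idea of the divide-and-conquer scheme can be illustrated with a short sketch: split the range in half, draw from a hypergeometric distribution how many of the $n$ samples fall into the left half, and recurse independently on both halves until the subproblems are small enough to solve directly. The sketch below is an illustration, not the authors' implementation; the function names (`sample_dc`, `hypergeometric`) and the naive $O(n)$ urn-based hypergeometric sampler are assumptions for readability — the paper relies on efficient hypergeometric generation for the stated bounds.

```python
import random

def hypergeometric(good, bad, draws):
    """Naive exact hypergeometric sampler via sequential urn draws.

    Illustration only: runs in O(draws); practical implementations
    use asymptotically faster methods.
    """
    k = 0
    for _ in range(draws):
        if random.random() * (good + bad) < good:
            k += 1
            good -= 1
        else:
            bad -= 1
    return k

def sample_dc(n, lo, hi, base=64):
    """Sample n distinct integers from [lo, hi) by divide and conquer.

    Small subproblems fall back to a direct sequential sampler;
    the two recursive calls are independent and could run in parallel.
    """
    N = hi - lo
    if n <= base:
        return random.sample(range(lo, hi), n)
    mid = lo + N // 2
    # How many of the n samples land in the left half follows a
    # hypergeometric distribution over (left size, right size, n).
    left = hypergeometric(mid - lo, hi - mid, n)
    return sample_dc(left, lo, mid, base) + sample_dc(n - left, mid, hi, base)
```

Because each half is sampled independently after the hypergeometric split, a parallel version only needs to communicate the split counts, which matches the abstract's claim that inter-processor communication is independent of the sample size.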
