
A comparison of CPUs, GPUs, FPGAs, and massively parallel processor arrays for random number generation

David Barrie Thomas, Lee Howes, Wayne Luk
Imperial College London
Proceedings of the ACM/SIGDA International Symposium on Field Programmable Gate Arrays, FPGA '09

@conference{thomas2009comparison,
   title={A comparison of CPUs, GPUs, FPGAs, and massively parallel processor arrays for random number generation},
   author={Thomas, D.B. and Howes, L. and Luk, W.},
   booktitle={Proceedings of the ACM/SIGDA international symposium on Field programmable gate arrays},
   pages={63--72},
   year={2009},
   organization={ACM}
}



The future of high-performance computing is likely to rely on the ability to efficiently exploit huge amounts of parallelism. One way of taking advantage of this parallelism is to formulate problems as “embarrassingly parallel” Monte-Carlo simulations, which allow applications to achieve a linear speedup over multiple computational nodes, without requiring a super-linear increase in inter-node communication. However, such applications are reliant on a cheap supply of high quality random numbers, particularly for the three main maximum entropy distributions: uniform, used as a general source of randomness; Gaussian, for discrete-time simulations; and exponential, for discrete-event simulations. In this paper we look at four different types of platform: conventional multi-core CPUs (Intel Core2); GPUs (NVidia GTX 200); FPGAs (Xilinx Virtex-5); and Massively Parallel Processor Arrays (Ambric AM2000). For each platform we determine the most appropriate algorithm for generating each type of number, then calculate the peak generation rate and estimated power efficiency for each device.
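The paper determines the most appropriate generation algorithm separately for each platform; the CUDA sketch below is only a rough illustration of how the three distributions are commonly produced on a GPU, not the authors' code. It combines a per-thread hybrid Tausworthe/LCG uniform generator (the well-known combined-Tausworthe constants), the Box-Muller transform for Gaussian samples, and the inverse-CDF method for exponential samples. The kernel structure, seeding scheme, and all names used here (taus_step, hybrid_uniform, generate) are assumptions made purely for illustration.

```cuda
// Illustrative sketch only -- not the code evaluated in the paper.
// Per-thread hybrid Tausworthe + LCG uniform generator, Box-Muller for
// Gaussian variates, and inverse CDF for exponential variates.
#include <cstdio>
#include <cuda_runtime.h>

__device__ unsigned taus_step(unsigned &z, int s1, int s2, int s3, unsigned m) {
    unsigned b = ((z << s1) ^ z) >> s2;
    return z = ((z & m) << s3) ^ b;
}

__device__ unsigned lcg_step(unsigned &z) {
    return z = 1664525u * z + 1013904223u;
}

__device__ float hybrid_uniform(uint4 &s) {
    // XOR of three Tausworthe streams and one LCG, scaled to (0,1).
    unsigned x = taus_step(s.x, 13, 19, 12, 4294967294u)
               ^ taus_step(s.y,  2, 25,  4, 4294967288u)
               ^ taus_step(s.z,  3, 11, 17, 4294967280u)
               ^ lcg_step(s.w);
    return 2.3283064365387e-10f * x + 1e-12f;  // keep strictly above 0 for logf
}

__global__ void generate(float *uni, float *gau, float *expo, int n) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    // Naive per-thread seeding for illustration; a real study would use
    // properly spaced or independently parameterised streams.
    uint4 s = make_uint4(128u + 4u * tid, 129u + 4u * tid,
                         130u + 4u * tid, 131u + 4u * tid);
    for (int i = tid; i < n; i += gridDim.x * blockDim.x) {
        float u0 = hybrid_uniform(s);
        float u1 = hybrid_uniform(s);
        uni[i]  = u0;                                                // uniform (0,1)
        gau[i]  = sqrtf(-2.0f * logf(u0)) * cosf(6.2831853f * u1);   // Box-Muller
        expo[i] = -logf(u1);                                         // Exp(1) by inversion
    }
}

int main() {
    const int n = 1 << 20;
    float *u, *g, *e;
    cudaMalloc(&u, n * sizeof(float));
    cudaMalloc(&g, n * sizeof(float));
    cudaMalloc(&e, n * sizeof(float));
    generate<<<128, 256>>>(u, g, e, n);
    cudaDeviceSynchronize();
    float h[4];
    cudaMemcpy(h, g, sizeof(h), cudaMemcpyDeviceToHost);
    printf("first Gaussian samples: %f %f %f %f\n", h[0], h[1], h[2], h[3]);
    cudaFree(u); cudaFree(g); cudaFree(e);
    return 0;
}
```

Note that this GPU-style recipe is only one option; as the abstract states, the most efficient generator differs per platform, and quantifying that difference in throughput and power efficiency is the point of the comparison.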
