A GPU-based Simulation for Stochastic Computing
Department of Electrical and Computer Engineering, University of Minnesota, Twin Cities, Minneapolis, MN, 55455
2nd International Workshop on GPUs and Scientific Applications (GPUScA), 2011
@inproceedings{li2011gpu,
  title={A GPU-based Simulation for Stochastic Computing},
  author={Li, P. and Xiao, W. and Lilja, D.J.},
  booktitle={2nd International Workshop on GPUs and Scientific Applications (GPUScA 2011)},
  pages={15},
  year={2011}
}
Stochastic computing performs operations using streams of bits that represent probability values instead of deterministic values. An important benefit of stochastic computing is that it can tolerate a large number of failures in a noisy system. Additionally, for the VLSI implementation of a sophisticated algorithm, a stochastic implementation can consume much less hardware and power than a deterministic implementation with comparable performance. However, simulating the stochastic implementation is extremely time consuming when it is run on a conventional processor. This is because a probabilistic value is represented by a long bit stream (for example, 8192 bits) in stochastic computing. Simulating the stochastic implementation is normally hundreds to thousands of times slower than simulating the deterministic implementation of the same algorithm. In this paper, we propose a GPU-based solution which can greatly speed up the simulation of stochastic computing. To validate our GPU-based simulation, we use the stochastic implementation of the frame difference-based image segmentation algorithm as an example to conduct extensive experiments. Measured results show that our GPU-based simulation can achieve up to 119 times speedup compared to the CPU-based simulation.
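A stochastic bit stream encodes a probability as the fraction of 1s it contains, so a single AND gate multiplies two independent unipolar streams. The paper itself gives no code; the following is a minimal Python sketch of that standard idea, with the 8192-bit stream length taken from the abstract (the probabilities `p` and `q` are arbitrary example values):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

N = 8192           # stream length mentioned in the abstract
p, q = 0.5, 0.375  # example probabilities to encode

# Encode each probability as a Bernoulli bit stream:
# each bit is 1 with the stream's target probability.
a = [random.random() < p for _ in range(N)]
b = [random.random() < q for _ in range(N)]

# An AND gate on independent unipolar streams multiplies probabilities.
product_stream = [x and y for x, y in zip(a, b)]

# Decode: the fraction of 1s estimates p * q (here 0.1875).
est = sum(product_stream) / N
print(est)
```

Even this single logical operation touches all 8192 bits of each stream, which illustrates why a software simulation of a stochastic circuit on a conventional processor is so much slower than its deterministic counterpart, and why the bit-level parallelism of a GPU is a natural fit.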
November 27, 2011 by hgpu