Streaming Parallel GPU Acceleration of Large-Scale Filter-Based Spiking Neural Networks

Leszek Slazynski, Sander Bohte
Department of Life Sciences, Centrum Wiskunde & Informatica, Science Park 123, NL-1098XG Amsterdam, NL
Computation in Neural Systems, 2012





The arrival of graphics processing unit (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude, the challenge is to develop fine-grained parallel algorithms that fully exploit the particulars of GPU hardware. Computation in a neural network is inherently parallel and thus a natural match for GPU architectures: given inputs, the internal state of each neuron can be updated in parallel. We show that for filter-based spiking neurons, like the Spike Response Model, the additive nature of membrane potential dynamics enables additional update parallelism. This also reduces the accumulation of numerical errors when using single-precision computation, the native precision of GPUs. We further show that optimizing simulation algorithms and data structures to the GPU's architecture has a large pay-off: for example, matching iterative neural updating to the memory architecture of the GPU speeds up this simulation step by a factor of three to five. With such optimizations, we can simulate in better-than-real-time plausible spiking neural networks of up to 50,000 neurons, processing over 35 million spiking events per second.
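The additive membrane dynamics the abstract refers to can be sketched in a few lines. This is a hedged, illustrative sketch, not the authors' implementation: in filter-based models such as the Spike Response Model, the membrane potential is a sum of kernel responses to past spikes, so spike contributions commute and can be accumulated in any order, which is the property that makes fine-grained parallel updates safe. All names and parameter values below are assumptions chosen for illustration.

```python
import math

# Illustrative sketch of additive, filter-based membrane updates in
# the spirit of the Spike Response Model (SRM). Parameter values and
# names are assumptions for illustration, not the paper's code.

K = 16          # length of the discretised PSP kernel, in time steps
N = 8           # number of neurons
TAU = 4.0       # kernel decay constant, in time steps

# Exponentially decaying post-synaptic potential kernel epsilon(s).
eps = [math.exp(-s / TAU) for s in range(K)]

# Ring buffer of future membrane contributions: buf[slot][neuron].
# Delivering a spike adds the kernel into the next K slots; because
# the update is purely additive it commutes, so spikes can be applied
# in any order (or concurrently, e.g. with atomic adds on a GPU).
buf = [[0.0] * N for _ in range(K)]
t = 0           # index of the current time slot

def deliver_spikes(targets, weights):
    """Accumulate weighted kernel responses for spikes arriving now."""
    for j, w in zip(targets, weights):
        for s in range(K):
            buf[(t + s) % K][j] += w * eps[s]

def step():
    """Advance one time step; return the membrane potentials."""
    global t
    u = buf[t][:]          # potential = accumulated filter responses
    buf[t] = [0.0] * N     # slot is consumed, recycle it
    t = (t + 1) % K
    return u

# Two spikes onto neuron 0 and one onto neuron 3, then one step.
deliver_spikes([0, 0, 3], [1.0, 0.5, 2.0])
u = step()   # u[0] == 1.5, u[3] == 2.0, since eps[0] == 1.0
```

On a GPU, the loop inside `deliver_spikes` would be distributed across threads; since every contribution is a plain addition into independent accumulators, the order of application does not affect the result, which also limits the growth of single-precision rounding error compared with long sequential summations.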
