Exploring Multi-level Parallelism for Large-Scale Spiking Neural Networks

Vivek K. Pallipuram, Melissa C. Smith, Nimisha Raut, Xiaoyu Ren
Holcombe Department of Electrical and Computer Engineering, Clemson University, Clemson, SC 29634, USA
The 2012 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA’12), 2012





Spiking Neural Networks (SNNs) such as the Hodgkin-Huxley (HH) and Izhikevich models have motivated several biologically inspired applications owing to their high biological accuracy. The inherently massively parallel nature of SNN simulations makes them a good fit for heterogeneous computing resources such as General-Purpose Graphics Processing Unit (GPGPU) clusters. In this research, we explore the multi-level parallelism offered by heterogeneous computing resources for large-scale SNN simulations. The simulations were performed using a two-level character-recognition network based on the aforementioned SNN models on NCSA's Forge GPGPU cluster. Our multi-node GPGPU implementation distributes computations to either the CPU or the GPGPU based on task classification and exploits all available levels of parallelism to maximize heterogeneous resource utilization. The implementation scales up to 200 million neurons for the two-level network and achieves a speedup of 355x over an equivalent MPI-only implementation.
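To make the per-neuron parallelism concrete, the sketch below implements one Euler step of the standard Izhikevich model (one of the two SNN models the abstract names). The parameter values (a, b, c, d) are the published regular-spiking defaults from Izhikevich's 2003 formulation, not values taken from this paper; the Python loop over neurons stands in for the per-neuron GPGPU threads a CUDA kernel would launch.

```python
# Hedged sketch: one Euler step of the Izhikevich neuron model.
# Parameters a, b, c, d are the standard regular-spiking values
# (a=0.02, b=0.2, c=-65, d=8); the paper's actual network and
# parameterization are not reproduced here.

def izhikevich_step(v, u, I, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Advance membrane potential v (mV) and recovery variable u by dt (ms).

    Returns (v, u, fired) for one neuron receiving input current I.
    """
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    fired = v >= 30.0           # spike threshold
    if fired:
        v, u = c, u + d         # post-spike reset
    return v, u, fired

# Each neuron's update is independent of every other neuron's within a
# timestep, so on a GPGPU one thread can own one neuron; this plain loop
# stands in for that thread grid.
def step_population(vs, us, Is, dt=0.5):
    out = [izhikevich_step(v, u, I, dt) for v, u, I in zip(vs, us, Is)]
    vs, us, fired = zip(*out)
    return list(vs), list(us), list(fired)
```

Because the only cross-neuron coupling in an SNN enters through the input currents `Is` (synaptic input accumulated from the previous timestep's spikes), the state update itself is embarrassingly parallel, which is the property the multi-node GPGPU implementation exploits.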
