Exploring Multi-level Parallelism for Large-Scale Spiking Neural Networks
Holcombe Department of Electrical and Computer Engineering, Clemson University, Clemson, SC 29634, USA
The 2012 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA’12), 2012
@inproceedings{pallipuram2012exploring,
title={Exploring Multi-level Parallelism for Large-Scale Spiking Neural Networks},
author={Pallipuram, V.K. and Smith, M.C. and Raut, N. and Ren, X.},
booktitle={The 2012 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'12)},
year={2012}
}
Spiking Neural Networks (SNNs), such as the Hodgkin-Huxley (HH) and Izhikevich models, have motivated several biologically inspired applications owing to their high biological accuracy. The inherently massively parallel nature of SNN simulations makes them a good fit for heterogeneous computing resources such as General Purpose Graphical Processing Unit (GPGPU) clusters. In this research, we explore the multi-level parallelism offered by heterogeneous computing resources for large-scale SNN simulations. These simulations were performed using a two-level character-recognition network based on the aforementioned SNN models on NCSA's Forge GPGPU cluster. Our multi-node GPGPU implementation distributes the computations to either the CPU or the GPGPU based on task classification and exploits all available multi-level parallelism to ensure maximum heterogeneous resource utilization. The implementation scales up to 200 million neurons for the two-level network and achieves a speedup of 355x over an equivalent MPI-only implementation.
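To give a sense of the per-neuron computation involved, the following is a minimal sketch (not the authors' code) of a forward-Euler update of the Izhikevich model, one of the two SNN models the abstract names. In a GPGPU implementation like the one described, each neuron's update of this kind would typically be mapped to its own thread; the parameter values here are the standard regular-spiking settings from Izhikevich's 2003 paper, and the constant input current is an illustrative assumption.

```python
def simulate_izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0,
                        t_max=1000.0, dt=0.5):
    """Simulate one Izhikevich neuron with forward Euler; return spike times (ms).

    Dynamics: v' = 0.04*v^2 + 5*v + 140 - u + I,  u' = a*(b*v - u);
    when v reaches 30 mV, a spike is recorded and v <- c, u <- u + d.
    """
    v, u = c, b * c          # membrane potential (mV) and recovery variable
    spikes = []
    t = 0.0
    while t < t_max:
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike threshold: record time, then reset
            spikes.append(t)
            v, u = c, u + d
        t += dt
    return spikes

if __name__ == "__main__":
    # With a constant suprathreshold input, the neuron fires repeatedly.
    print(len(simulate_izhikevich()))
```

A large-scale simulation applies this same independent update to every neuron each time step, which is why the workload parallelizes so naturally across GPGPU threads and cluster nodes.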
September 23, 2012 by hgpu