BrainFrame: A heterogeneous accelerator platform for neuron simulations
Neuroscience dept., Erasmus MC, Wytemaweg 80, 3015GE, Rotterdam, NL
arXiv:1612.01501 [cs.NE], (6 Dec 2016)
@article{smaragdos2016brainframe,
title={BrainFrame: A heterogeneous accelerator platform for neuron simulations},
author={Smaragdos, Georgios and Chatzikonstantis, Georgios and Kukreja, Rahul and Sidiropoulos, Harrys and Rodopoulos, Dimitrios and Sourdis, Ioannis and Al-Ars, Zaid and Kachris, Christoforos and Soudris, Dimitrios and De Zeeuw, Chris I. and Strydis, Christos},
year={2016},
month={dec},
archivePrefix={arXiv},
eprint={1612.01501},
primaryClass={cs.NE}
}
OBJECTIVE: The advent of High-Performance Computing (HPC) in recent years has led to its increasing use in brain study through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit a single (homogeneous) acceleration platform to effectively address the complete array of modeling requirements.

APPROACH: In this paper we propose and build BrainFrame, a heterogeneous acceleration platform incorporating three distinct acceleration technologies: a Dataflow Engine, a Xeon Phi and a GP-GPU. The PyNN framework is also integrated into the platform. As a challenging proof of concept, we analyze the performance of BrainFrame on different instances of a state-of-the-art neuron model of the Inferior-Olivary Nucleus, which uses a biophysically meaningful, extended Hodgkin-Huxley representation. The model instances take into account not only the neuronal-network dimensions but also different network-connectivity settings that can drastically change the application's workload characteristics.

MAIN RESULTS: Combining the three HPC technologies demonstrated that BrainFrame is better able to cope with the modeling diversity encountered. Our performance analysis clearly shows that the choice of model instance directly affects performance and that all three technologies are required to cope with the full range of model use cases.
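To make the notion of a "model instance" concrete, the sketch below illustrates, in PyNN terms, how a network instance could be parameterized by its dimension and connectivity density, the two factors the abstract identifies as driving workload. It is a minimal, hypothetical sketch: the abstract only states that PyNN is integrated into BrainFrame, so a standard simulator backend (pyNN.nest) stands in for the unnamed BrainFrame backend, and PyNN's built-in HH_cond_exp cell stands in for the paper's extended Hodgkin-Huxley inferior-olive model, which is not a PyNN standard cell.

    # Hypothetical sketch of a PyNN-described network instance; backend and
    # cell model are stand-ins, not the actual BrainFrame components.
    import pyNN.nest as sim

    N_CELLS = 96        # network dimension (varied per model instance)
    P_CONNECT = 0.1     # connectivity density (varied per model instance)

    sim.setup(timestep=0.025)  # ms

    cells = sim.Population(N_CELLS, sim.HH_cond_exp(), label="olivary_stand_in")

    # The coupling between cells is approximated here by a probabilistic
    # projection; its density drives the work done per simulation step.
    coupling = sim.Projection(
        cells, cells,
        sim.FixedProbabilityConnector(p_connect=P_CONNECT),
        synapse_type=sim.StaticSynapse(weight=0.004, delay=0.1),
    )

    cells.record("v")
    sim.run(1000.0)     # ms of biological time
    sim.end()

Sweeping N_CELLS and P_CONNECT in such a description reproduces, at the network-specification level, the kind of workload variation the paper evaluates across its three acceleration backends.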
December 10, 2016 by hgpu