Accelerating complex brain-model simulations on GPU platforms

H.A. Du Nguyen, Zaid Al-Ars, Georgios Smaragdos, Christos Strydis
Laboratory of Computer Engineering, Faculty of EE, Mathematics and CS, Delft University of Technology, Delft, The Netherlands
18th Design, Automation & Test in Europe conference, 2015


The Inferior Olive (IO) in the brain, in conjunction with the cerebellum, is responsible for crucial sensorimotor-integration functions in humans. In this paper, we simulate a computationally challenging IO-neuron model, consisting of three compartments per neuron in a network arrangement, on GPU platforms. Several GPU platforms of the two latest NVIDIA GPU architectures (Fermi, Kepler) have been used to simulate large-scale IO-neuron networks. These networks have been ported to four diverse GPU platforms, and the implementation has been optimized, achieving a 3x speedup over the unoptimized version. The effects of the GPU L1-cache configuration, the thread-block size and the application's numerical precision on performance have been evaluated, and the best configurations have been chosen. As a result, a maximum speedup of 160x has been achieved with respect to a reference CPU platform.
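
The abstract names three GPU tuning parameters: the L1-cache configuration, the thread-block size and the numerical precision. The sketch below is a minimal, hypothetical CUDA illustration of how such a configuration sweep could be set up for a one-thread-per-neuron kernel; the neuron state, the placeholder compartment update and all identifiers (NeuronState, io_step, n_neurons) are assumptions for illustration only and are not taken from the paper.

```cuda
// Minimal sketch (not the authors' code): sweeps thread-block size and sets an
// L1-cache preference for a one-thread-per-neuron kernel. The three-compartment
// update is a placeholder; real IO dynamics and gap-junction coupling are omitted.
#include <cstdio>
#include <cuda_runtime.h>

typedef float real_t;              // switch to double to compare numerical precision

struct NeuronState {               // hypothetical 3-compartment state
    real_t dend, soma, axon;
};

__global__ void io_step(NeuronState *state, int n_neurons, real_t dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_neurons) return;
    // Placeholder update: each compartment relaxes toward its neighbour(s).
    NeuronState s = state[i];
    s.dend += dt * (s.soma - s.dend);
    s.soma += dt * (real_t)0.5 * (s.dend + s.axon - (real_t)2.0 * s.soma);
    s.axon += dt * (s.soma - s.axon);
    state[i] = s;
}

int main()
{
    const int n_neurons = 96 * 96;          // network size chosen only for illustration
    const real_t dt = (real_t)0.05;

    NeuronState *d_state;
    cudaMalloc(&d_state, n_neurons * sizeof(NeuronState));
    cudaMemset(d_state, 0, n_neurons * sizeof(NeuronState));

    // Prefer a larger L1 cache over shared memory for this kernel
    // (one of the configuration choices the paper evaluates).
    cudaFuncSetCacheConfig(io_step, cudaFuncCachePreferL1);

    for (int block = 64; block <= 512; block *= 2) {   // sweep thread-block size
        int grid = (n_neurons + block - 1) / block;
        cudaEvent_t t0, t1;
        cudaEventCreate(&t0); cudaEventCreate(&t1);
        cudaEventRecord(t0);
        for (int step = 0; step < 1000; ++step)
            io_step<<<grid, block>>>(d_state, n_neurons, dt);
        cudaEventRecord(t1);
        cudaEventSynchronize(t1);
        float ms = 0.f;
        cudaEventElapsedTime(&ms, t0, t1);
        printf("block %3d : %.2f ms for 1000 steps\n", block, ms);
        cudaEventDestroy(t0); cudaEventDestroy(t1);
    }

    cudaFree(d_state);
    return 0;
}
```

Changing real_t from float to double reproduces the precision comparison, while the block-size loop and the cudaFuncSetCacheConfig call correspond to the other two configuration choices evaluated in the paper.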