
Accelerating complex brain-model simulations on GPU platforms

H.A. Du Nguyen, Zaid Al-Ars, Georgios Smaragdos, Christos Strydis
Laboratory of Computer Engineering, Faculty of EE, Mathematics and CS, Delft University of Technology, Delft, The Netherlands
18th Design, Automation & Test in Europe Conference (DATE), 2015

@inproceedings{Nguyen2015Accelerating,
   author    = {H.A. Du Nguyen and Zaid Al-Ars and Georgios Smaragdos and Christos Strydis},
   title     = {Accelerating complex brain-model simulations on {GPU} platforms},
   booktitle = {Proc. 18th Design, Automation \& Test in Europe Conference (DATE)},
   address   = {Grenoble, France},
   month     = {March},
   year      = {2015}
}


The Inferior Olive (IO) in the brain, in conjunction with the cerebellum, is responsible for crucial sensorimotor-integration functions in humans. In this paper, we simulate a computationally challenging IO neuron model, consisting of three compartments per neuron in a network arrangement, on GPU platforms. Several GPUs from the two latest NVIDIA architectures (Fermi and Kepler) have been used to simulate large-scale IO-neuron networks. These networks have been ported to four diverse GPU platforms, and the implementation has been optimized, achieving a 3x speedup over the unoptimized version. The effects of the GPU L1-cache configuration and the thread-block size, as well as the impact of the application's numerical precision, have been evaluated and the best-performing configurations chosen. In effect, a maximum speedup of 160x has been achieved with respect to a reference CPU platform.
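The tuning parameters mentioned in the abstract (L1-cache preference, thread-block size, numerical precision) map onto standard CUDA controls. The sketch below is not the paper's implementation; it is a minimal, hypothetical illustration of how a one-thread-per-neuron kernel with placeholder three-compartment state might expose those knobs on Fermi/Kepler-class GPUs.

// Minimal sketch (not the paper's code): one thread per neuron, three state
// variables standing in for the dendrite, soma and axon compartments.
// 'real' can be switched between float and double to study precision impact.
#include <cstdio>
#include <cuda_runtime.h>

typedef float real;  // change to double to evaluate numerical-precision cost

__global__ void io_step(real* dend, real* soma, real* axon, int n, real dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // Placeholder compartment updates; the actual IO model couples the
    // compartments through ionic currents and gap junctions.
    dend[i] += dt * (soma[i] - dend[i]);
    soma[i] += dt * (dend[i] + axon[i] - (real)2 * soma[i]);
    axon[i] += dt * (soma[i] - axon[i]);
}

int main() {
    const int n = 1 << 16;    // network size (hypothetical)
    const int block = 128;    // thread-block size: one of the tuned parameters
    const real dt = (real)0.05;

    real *dend, *soma, *axon;
    cudaMalloc(&dend, n * sizeof(real));
    cudaMalloc(&soma, n * sizeof(real));
    cudaMalloc(&axon, n * sizeof(real));
    cudaMemset(dend, 0, n * sizeof(real));
    cudaMemset(soma, 0, n * sizeof(real));
    cudaMemset(axon, 0, n * sizeof(real));

    // Prefer a larger L1 cache over shared memory for this kernel
    // (Fermi/Kepler expose this split via cudaFuncSetCacheConfig).
    cudaFuncSetCacheConfig(io_step, cudaFuncCachePreferL1);

    int grid = (n + block - 1) / block;
    for (int step = 0; step < 1000; ++step)
        io_step<<<grid, block>>>(dend, soma, axon, n, dt);
    cudaDeviceSynchronize();

    printf("done: %d neurons, block size %d\n", n, block);
    cudaFree(dend); cudaFree(soma); cudaFree(axon);
    return 0;
}

In such a setup, the block size, the L1/shared-memory split and the choice of real would be swept to find the best-performing configuration per GPU, which is the kind of design-space exploration the paper describes.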