fMRI analysis on the GPU – Possibilities and challenges
Linköping University
Computer Methods and Programs in Biomedicine, Volume 105, Issue 2, Pages 145-161, 2012
@article{eklund2011fmri,
title={fMRI analysis on the GPU--Possibilities and challenges},
author={Eklund, A. and Andersson, M. and Knutsson, H.},
journal={Computer Methods and Programs in Biomedicine},
volume={105},
number={2},
pages={145--161},
year={2012},
publisher={Elsevier}
}
Functional magnetic resonance imaging (fMRI) makes it possible to non-invasively measure brain activity with high spatial resolution. There are, however, a number of issues that have to be addressed. One is the large amount of spatio-temporal data that needs to be processed. In addition to the statistical analysis itself, several preprocessing steps, such as slice timing correction and motion compensation, are normally applied. The high computational power of modern graphics cards has already been used successfully for MRI and fMRI. Going beyond the first published demonstration of GPU-based analysis of fMRI data, all the preprocessing steps and two statistical approaches, the general linear model (GLM) and canonical correlation analysis (CCA), have been implemented on a GPU. For an fMRI dataset of typical size (80 volumes with 64 x 64 x 22 voxels), all the preprocessing takes about 0.5 s on the GPU, compared to 5 s with an optimized CPU implementation and 120 s with the commonly used statistical parametric mapping (SPM) software. A random permutation test with 10,000 permutations, with smoothing in each permutation, takes about 50 s if three GPUs are used, compared to 0.5-2.5 h with an optimized CPU implementation. The presented work will save time for researchers and clinicians in their daily work and enable the use of more advanced analysis, such as non-parametric statistics, both for conventional fMRI and for real-time fMRI.
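To illustrate the two statistical pieces the abstract mentions, here is a minimal CPU-side sketch (not the paper's GPU code) of a voxelwise GLM contrast and a random permutation test on the maximum t-statistic. The dataset dimensions match those quoted above (80 volumes of 64 x 64 x 22 voxels); the boxcar paradigm, the synthetic data, and the reduced permutation count are assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dataset of the size quoted in the abstract: 80 volumes, 64 x 64 x 22 voxels.
# The data here are synthetic noise; a real analysis would load preprocessed fMRI volumes.
n_vols, nx, ny, nz = 80, 64, 64, 22
n_voxels = nx * ny * nz
Y = rng.standard_normal((n_vols, n_voxels))

# Design matrix: intercept plus one boxcar regressor (a hypothetical on/off paradigm).
boxcar = np.tile(np.r_[np.zeros(10), np.ones(10)], 4)
X = np.column_stack([np.ones(n_vols), boxcar])
c = np.array([0.0, 1.0])  # contrast testing the boxcar regressor

def glm_t(X, Y, c):
    """Ordinary least squares t-statistics for contrast c, all voxels at once."""
    beta = np.linalg.pinv(X) @ Y                  # (regressors, voxels)
    resid = Y - X @ beta
    dof = X.shape[0] - np.linalg.matrix_rank(X)
    sigma2 = (resid ** 2).sum(axis=0) / dof       # residual variance per voxel
    var_c = c @ np.linalg.inv(X.T @ X) @ c        # contrast variance factor
    return (c @ beta) / np.sqrt(sigma2 * var_c)

t_obs = glm_t(X, Y, c)

# Random permutation test: shuffle volumes in time, refit, and record the
# maximum t over all voxels to build a corrected null distribution.
n_perm = 100  # the paper uses 10,000 permutations (with smoothing in each)
max_t = np.empty(n_perm)
for i in range(n_perm):
    perm = rng.permutation(n_vols)
    max_t[i] = glm_t(X, Y[perm], c).max()

threshold = np.quantile(max_t, 0.95)  # corrected 5% significance threshold
```

The voxelwise fit is embarrassingly parallel, which is exactly why it maps well onto a GPU: each voxel's time series can be processed independently, and the permutation loop multiplies that work by the number of permutations.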
July 6, 2012 by hgpu