
Efficient Acceleration of Mutual Information Computation for Nonrigid Registration using CUDA

Kei Ikeda, Fumihiko Ino, Kenichi Hagihara
Graduate School of Information Science and Technology, Osaka University, 1-5 Yamadaoka, Suita, Osaka 565-0871, Japan
IEEE Journal of Biomedical and Health Informatics, 2014

@article{ikeda2014efficient,
   title={Efficient Acceleration of Mutual Information Computation for Nonrigid Registration using CUDA},
   author={Ikeda, Kei and Ino, Fumihiko and Hagihara, Kenichi},
   journal={IEEE Journal of Biomedical and Health Informatics},
   year={2014}
}


In this paper, we propose an efficient acceleration method for the nonrigid registration of multimodal images that uses a graphics processing unit (GPU). The key contribution of our method is efficient utilization of on-chip memory for both normalized mutual information (NMI) computation and hierarchical B-spline deformation, which compose a well-known registration algorithm. We implement this registration algorithm as a compute unified device architecture (CUDA) program with an efficient parallel scheme and several optimization techniques, such as hierarchical data organization, data reuse, and multiresolution representation. We experimentally evaluate our method with four clinical datasets consisting of up to 512×512×296 voxels. We find that exploitation of on-chip memory achieves a 12-fold speedup over an off-chip memory version and thereby raises the efficiency of parallel execution from 4% to 46%. We also find that our method running on a GeForce GTX 580 card is approximately 14 times faster than a fully optimized CPU-based implementation running on four cores. Some multimodal registration results are also provided to illustrate the limitations of our method. We believe that our highly efficient method, which completes an alignment task within a few tens of seconds, will be useful for realizing rapid nonrigid registration.
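The central optimization described in the abstract is keeping the histograms needed for NMI in fast on-chip memory rather than in off-chip device memory. Below is a minimal CUDA sketch of that general technique, i.e., a per-block joint histogram accumulated in shared memory and merged into global memory once per block. It is an illustration only, not the authors' implementation; the bin count, data types, and kernel structure are assumptions made for this sketch.

#include <cuda_runtime.h>

#define BINS 32  // assumed number of intensity bins per image

// ref, flt:  reference and deformed floating image intensities, pre-quantized to [0, BINS)
// hist:      global joint histogram of size BINS * BINS
// numVoxels: number of voxels in the overlap region
__global__ void jointHistogramShared(const unsigned char *ref,
                                     const unsigned char *flt,
                                     unsigned int *hist,
                                     int numVoxels)
{
    // Per-block joint histogram kept in on-chip shared memory (4 KB for 32 x 32 bins)
    __shared__ unsigned int sh[BINS * BINS];

    // Cooperatively zero the shared histogram
    for (int i = threadIdx.x; i < BINS * BINS; i += blockDim.x)
        sh[i] = 0;
    __syncthreads();

    // Grid-stride loop over voxels; atomic updates hit fast shared memory
    for (int v = blockIdx.x * blockDim.x + threadIdx.x; v < numVoxels;
         v += gridDim.x * blockDim.x)
        atomicAdd(&sh[ref[v] * BINS + flt[v]], 1u);
    __syncthreads();

    // Merge the per-block histogram into the global one, once per block
    for (int i = threadIdx.x; i < BINS * BINS; i += blockDim.x)
        atomicAdd(&hist[i], sh[i]);
}

From the resulting joint and marginal histograms, NMI can be evaluated with the usual definition NMI(R, F) = (H(R) + H(F)) / H(R, F), where H denotes Shannon entropy; keeping the accumulation in shared memory avoids the latency of issuing a global atomic operation per voxel.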
