Mutual information computation and maximization using GPU
Computer Science Department, University of Southern California, 3737 Watt Way, PHE 101 Los Angeles, CA, 90089
2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (2008), Publisher: IEEE, Pages: 1-6
@inproceedings{lin2008mutual,
  title={Mutual information computation and maximization using GPU},
  author={Lin, Y. and Medioni, G.},
  booktitle={2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops},
  pages={1--6},
  year={2008},
  publisher={IEEE}
}
We present a GPU implementation to compute both mutual information and its derivatives. Mutual information computation is highly demanding because of the enormous number of exponential evaluations it requires, and it is therefore the bottleneck in many image registration applications. We show, however, that these computations are fully parallelizable and can be ported efficiently onto the GPU architecture. Compared with an equivalent implementation running on a workstation-class CPU, we achieve a speedup factor of 170 for computing mutual information and a factor of 400 for computing its derivatives.
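To illustrate the quantity the paper accelerates, here is a minimal CPU sketch of mutual information between two images, computed from a joint intensity histogram. This is not the authors' GPU method: the paper uses Parzen windowing (the source of the exponential evaluations), whereas this sketch uses simple hard binning; the function name and bin count are illustrative choices.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # Joint histogram of intensity pairs. The paper's GPU implementation
    # accelerates a Parzen-window estimate of this joint density; hard
    # binning is used here only to keep the example short.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability p(x, y)
    px = pxy.sum(axis=1)               # marginal p(x)
    py = pxy.sum(axis=0)               # marginal p(y)
    # MI = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) ), nonzero terms only
    nz = pxy > 0
    outer = px[:, None] * py[None, :]
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / outer[nz])))

# An image shares more information with itself than with independent noise.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
noise = rng.integers(0, 256, size=(64, 64)).astype(float)
print(mutual_information(img, img), mutual_information(img, noise))
```

In registration, this value is maximized over transform parameters; each histogram bin pair contributes an independent term, which is why the computation parallelizes well across GPU threads.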
March 16, 2011 by hgpu