
Speeding up Mutual Information Computation Using NVIDIA CUDA Hardware

Ramtin Shams, Nick Barnes
Research School of Information Sciences and Engineering (RSISE), The Australian National University (ANU), Canberra, ACT 0200
9th Biennial Conference of the Australian Pattern Recognition Society on Digital Image Computing Techniques and Applications, 2008

@conference{shams2008speeding,
   title={Speeding up mutual information computation using NVIDIA CUDA hardware},
   author={Shams, R. and Barnes, N.},
   booktitle={9th Biennial Conference of the Australian Pattern Recognition Society on Digital Image Computing Techniques and Applications (DICTA)},
   pages={555--560},
   isbn={0769530672},
   year={2008},
   organization={IEEE}
}


We present an efficient method for mutual information (MI) computation between images (2D or 3D) for NVIDIA’s “compute unified device architecture” (CUDA) compatible devices. Efficient parallelization of MI is particularly challenging on a “graphics processor unit” (GPU) due to the need for histogram-based calculation of joint and marginal probability mass functions (pmfs) with a large number of bins. The data-dependent (unpredictable) nature of the updates to the histogram, together with hardware limitations of the GPU (lack of synchronization primitives and limited memory caching mechanisms), can make GPU-based computation inefficient. To overcome these limitations, we approximate the pmfs using a down-sampled version of the joint histogram, which avoids memory update problems. Our CUDA implementation improves the efficiency of MI calculations by a factor of 25 compared to a standard CPU-based implementation and can be used in MI-based image registration applications.
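To make the quantities in the abstract concrete, the sketch below builds a down-sampled joint histogram (32×32 bins instead of 256×256) on the GPU and then computes the MI sum from it on the host. It relies on shared-memory atomics for the per-block histograms, a convenience of current hardware; the paper's contribution is an approximation scheme that avoids such data-dependent update conflicts on 2008-era devices, so this is an illustration of the computation, not the authors' algorithm. Kernel and variable names (jointHistKernel, NBINS, etc.) are illustrative.

```cuda
// Minimal sketch: down-sampled joint histogram on the GPU + MI on the host.
// Assumes 8-bit intensities reduced to NBINS x NBINS bins. Uses shared-memory
// atomics, which differs from the paper's atomic-free approach.
#include <cuda_runtime.h>
#include <cstdio>
#include <cmath>
#include <vector>

#define NBINS 32   // down-sampled bin count per image
#define SHIFT 3    // 256 intensity levels >> 3 -> 32 bins

__global__ void jointHistKernel(const unsigned char* a, const unsigned char* b,
                                int n, unsigned int* hist /* NBINS*NBINS */)
{
    // Per-block histogram in shared memory to reduce global atomic traffic.
    __shared__ unsigned int sh[NBINS * NBINS];
    for (int i = threadIdx.x; i < NBINS * NBINS; i += blockDim.x) sh[i] = 0;
    __syncthreads();

    // Grid-stride loop over voxel pairs; each pair increments one joint bin.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += blockDim.x * gridDim.x) {
        int ba = a[i] >> SHIFT;
        int bb = b[i] >> SHIFT;
        atomicAdd(&sh[ba * NBINS + bb], 1u);
    }
    __syncthreads();

    // Merge the block-local histogram into the global histogram.
    for (int i = threadIdx.x; i < NBINS * NBINS; i += blockDim.x)
        atomicAdd(&hist[i], sh[i]);
}

// MI from the joint histogram: MI = sum_{a,b} p(a,b) * log(p(a,b) / (p(a) p(b))).
double mutualInformation(const std::vector<unsigned int>& h, int n)
{
    std::vector<double> pa(NBINS, 0.0), pb(NBINS, 0.0);
    for (int i = 0; i < NBINS; ++i)
        for (int j = 0; j < NBINS; ++j) {
            double p = double(h[i * NBINS + j]) / n;
            pa[i] += p;   // marginal pmf of image A
            pb[j] += p;   // marginal pmf of image B
        }
    double mi = 0.0;
    for (int i = 0; i < NBINS; ++i)
        for (int j = 0; j < NBINS; ++j) {
            double p = double(h[i * NBINS + j]) / n;
            if (p > 0.0) mi += p * std::log(p / (pa[i] * pb[j]));
        }
    return mi;  // in nats
}

int main()
{
    const int n = 1 << 20;  // illustrative image size (1M voxels)
    std::vector<unsigned char> imgA(n), imgB(n);
    for (int i = 0; i < n; ++i) { imgA[i] = i & 0xFF; imgB[i] = (i * 7) & 0xFF; }

    unsigned char *dA, *dB; unsigned int *dHist;
    cudaMalloc(&dA, n); cudaMalloc(&dB, n);
    cudaMalloc(&dHist, NBINS * NBINS * sizeof(unsigned int));
    cudaMemcpy(dA, imgA.data(), n, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, imgB.data(), n, cudaMemcpyHostToDevice);
    cudaMemset(dHist, 0, NBINS * NBINS * sizeof(unsigned int));

    jointHistKernel<<<256, 256>>>(dA, dB, n, dHist);

    std::vector<unsigned int> hist(NBINS * NBINS);
    cudaMemcpy(hist.data(), dHist, NBINS * NBINS * sizeof(unsigned int),
               cudaMemcpyDeviceToHost);
    printf("MI = %f nats\n", mutualInformation(hist, n));

    cudaFree(dA); cudaFree(dB); cudaFree(dHist);
    return 0;
}
```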