
Monte Carlo simulation of photon migration in 3D turbid media accelerated by graphics processing units

Qianqian Fang, David A. Boas
Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Charlestown, Massachusetts, 02129, USA
Opt. Express, Vol. 17, No. 22 (26 October 2009), pp. 20178–20190

@article{fang2009monte,
   title={Monte Carlo simulation of photon migration in 3D turbid media accelerated by graphics processing units},
   author={Fang, Q. and Boas, D. A.},
   journal={Optics Express},
   volume={17},
   number={22},
   pages={20178--20190},
   issn={1094-4087},
   year={2009},
   publisher={Optical Society of America}
}

We report a parallel Monte Carlo algorithm, accelerated by graphics processing units (GPUs), for modeling time-resolved photon migration in arbitrary 3D turbid media. By taking advantage of massively parallel threads and low memory latency, this algorithm allows many photons to be simulated simultaneously on a GPU. To further improve computational efficiency, we explored two parallel random number generators (RNGs), including a floating-point-only RNG based on a chaotic lattice. An efficient scheme for boundary reflection was implemented, along with functions for time-resolved imaging. For a homogeneous semi-infinite medium, good agreement was observed between the simulation output and the analytical solution from diffusion theory. The code was implemented in the CUDA programming language and benchmarked under various parameters, such as thread number, choice of RNG, and memory access pattern. On a low-cost graphics card, this algorithm demonstrated an acceleration ratio above 300 over conventional CPU computation when using 1792 parallel threads; the ratio drops to 75 when atomic operations are used. These results make GPU-based Monte Carlo simulation a practical solution for data analysis in a wide range of diffuse optical imaging applications, such as human brain or small-animal imaging.
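
The floating-point-only RNG mentioned in the abstract is based on a chaotic lattice. Below is a minimal CUDA sketch of that general idea, built from a small ring of coupled logistic maps; the lattice length, coupling weight, output mixing, seeding scheme, and the name lattice_rand are all illustrative assumptions, not the authors' exact implementation. The appeal of such a generator is that it needs only single-precision multiplies and adds, which map directly onto GPU arithmetic units.

// Illustrative sketch, not the authors' code: a per-thread, floating-point-only
// RNG built from a small ring of coupled logistic maps, in the spirit of the
// chaotic-lattice generator named in the abstract. Lattice length, coupling
// weight, output mixing, and seeding are all assumptions for demonstration.
#include <cstdio>
#include <cuda_runtime.h>

#define LATTICE_LEN 5   // number of coupled logistic maps per thread (assumed)

// One step of the chaotic logistic map x -> 4x(1-x), which maps (0,1) to (0,1).
__device__ inline float logistic(float x) { return 4.0f * x * (1.0f - x); }

// Advance the lattice: update every site, then mix each site with its ring
// neighbor so the per-site orbits stay coupled rather than evolving alone.
__device__ void lattice_step(float t[LATTICE_LEN]) {
    float s[LATTICE_LEN];
    for (int i = 0; i < LATTICE_LEN; ++i) s[i] = logistic(t[i]);
    for (int i = 0; i < LATTICE_LEN; ++i)
        t[i] = 0.9f * s[i] + 0.1f * s[(i + 1) % LATTICE_LEN]; // assumed weights
}

// Draw one uniform variate in [0,1) using only float multiplies and adds.
__device__ float lattice_rand(float t[LATTICE_LEN]) {
    lattice_step(t);
    float r = t[0] + t[2];      // combine two sites to whiten the output
    return r - floorf(r);       // fold the sum back into [0,1)
}

__global__ void sample_kernel(float *out, int n_per_thread, unsigned int seed) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    float t[LATTICE_LEN];       // the whole RNG state lives in registers
    for (int i = 0; i < LATTICE_LEN; ++i) {
        float u = fmodf((tid * LATTICE_LEN + i + 1) * 0.6180339887f
                        + seed * 1e-4f, 1.0f);
        t[i] = 0.001f + 0.998f * u;   // keep seeds strictly inside (0,1)
    }
    for (int i = 0; i < 100; ++i) lattice_step(t);  // warm up to decorrelate
    for (int i = 0; i < n_per_thread; ++i)
        out[tid * n_per_thread + i] = lattice_rand(t);
}

int main() {
    const int blocks = 7, threads = 256, n = 4;  // 1792 threads, as benchmarked
    float *d_out, h_out[8];
    cudaMalloc(&d_out, blocks * threads * n * sizeof(float));
    sample_kernel<<<blocks, threads>>>(d_out, n, 12345u);
    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
    for (int i = 0; i < 8; ++i) printf("%f\n", h_out[i]);
    cudaFree(d_out);
    return 0;
}

Because each thread keeps its entire RNG state in registers, no global-memory traffic or inter-thread synchronization is needed per random draw; in a full photon-migration simulation, a generator like this would drive the sampling of scattering lengths and deflection angles.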
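The reported drop in speedup, from above 300 to 75 with atomic operations, reflects contention on the shared fluence grid: near the source, many threads deposit photon weight into the same voxels at once, and the hardware serializes colliding atomic updates. The toy CUDA sketch below makes that trade-off concrete; the 64^3 grid, the voxel_index and deposit_atomic helpers, and the worst-case access pattern are assumptions for illustration, and float atomicAdd requires compute capability 2.0 or later.

// Illustrative sketch, with an assumed 64^3 grid and deposit interface: why
// atomic accumulation into a shared fluence array costs performance. Every
// colliding atomicAdd is serialized by the hardware.
#include <cstdio>
#include <cuda_runtime.h>

#define NX 64
#define NY 64
#define NZ 64

// Map a continuous position (in voxel units) to a clamped linear voxel index.
__device__ inline int voxel_index(float x, float y, float z) {
    int ix = min(max((int)floorf(x), 0), NX - 1);
    int iy = min(max((int)floorf(y), 0), NY - 1);
    int iz = min(max((int)floorf(z), 0), NZ - 1);
    return (iz * NY + iy) * NX + ix;
}

// Race-free deposit of photon weight w; colliding threads are serialized,
// which is the price reflected in the lower speedup reported with atomics.
__device__ inline void deposit_atomic(float *fluence, float x, float y,
                                      float z, float w) {
    atomicAdd(&fluence[voxel_index(x, y, z)], w);
}

__global__ void toy_kernel(float *fluence) {
    // Worst case: every thread deposits at the source voxel simultaneously.
    deposit_atomic(fluence, 32.0f, 32.0f, 32.0f, 1.0f);
}

int main() {
    float *d_f, total;
    cudaMalloc(&d_f, NX * NY * NZ * sizeof(float));
    cudaMemset(d_f, 0, NX * NY * NZ * sizeof(float));
    toy_kernel<<<7, 256>>>(d_f);            // 1792 threads, as in the benchmark
    int idx = (32 * NY + 32) * NX + 32;     // host-side copy of voxel_index
    cudaMemcpy(&total, d_f + idx, sizeof(float), cudaMemcpyDeviceToHost);
    printf("deposited weight = %f (expect 1792)\n", total);
    cudaFree(d_f);
    return 0;
}

Swapping deposit_atomic for a plain fluence[idx] += w removes the serialization but silently loses weight whenever two threads collide on the same voxel, which is the accuracy-versus-speed trade-off that the benchmark numbers quantify.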
