Performance and scalability of Fourier domain optical coherence tomography acceleration using graphics processing units

Jian Li, Pavel Bloch, Jing Xu, Marinko V. Sarunic, Lesley Shannon
School of Engineering Sciences, Simon Fraser University, V5A 1S6 Burnaby BC, Canada
Applied Optics, Vol. 50, Issue 13, pp. 1832-1838, 2011

@article{li2011performance,
   title={Performance and scalability of Fourier domain optical coherence tomography acceleration using graphics processing units},
   author={Li, J. and Bloch, P. and Xu, J. and Sarunic, M.V. and Shannon, L.},
   journal={Applied Optics},
   volume={50},
   number={13},
   pages={1832--1838},
   year={2011},
   publisher={Optical Society of America}
}

Fourier domain optical coherence tomography (FD-OCT) provides faster line rates, better resolution, and higher sensitivity for noninvasive, in vivo biomedical imaging compared to traditional time domain OCT (TD-OCT). However, because the signal processing for FD-OCT is computationally intensive, real-time FD-OCT applications demand powerful computing platforms to deliver acceptable performance. Graphics processing units (GPUs) have been used as coprocessors to accelerate FD-OCT by leveraging their relatively simple programming model to exploit thread-level parallelism. Unfortunately, GPUs do not "share" memory with their host processors, requiring additional data transfers between the GPU and CPU. In this paper, we implement a complete FD-OCT accelerator on a consumer-grade GPU/CPU platform. Our data acquisition system uses spectrometer-based detection and a dual-arm interferometer topology with numerical dispersion compensation for retinal imaging. We demonstrate that the maximum line rate is dictated by the memory transfer time and not the processing time due to the GPU platform's memory model. Finally, we discuss how the performance trends of GPU-based accelerators compare to the expected future requirements of FD-OCT data rates.
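The per-A-scan computation the paper accelerates on the GPU can be sketched in a few lines. The snippet below is a hedged illustration, not the authors' implementation: it shows the standard spectrometer-based FD-OCT chain the abstract refers to (background subtraction, numerical dispersion compensation, and an FFT that maps each spectral interferogram to a depth profile), using NumPy on the CPU for clarity. The function name `process_ascan` and the synthetic single-reflector test signal are assumptions for illustration.

```python
import numpy as np

def process_ascan(spectrum, background, dispersion_phase):
    """Convert one spectral interferogram into a depth profile (A-scan).

    spectrum, background : real-valued arrays sampled uniformly in
        wavenumber k (spectrometer output and its DC/reference term).
    dispersion_phase : per-sample phase (radians) applied to cancel the
        dispersion mismatch between the two interferometer arms.
    """
    fringe = spectrum - background                  # remove the DC term
    corrected = fringe * np.exp(-1j * dispersion_phase)
    ascan = np.abs(np.fft.ifft(corrected))          # depth-domain magnitude
    return ascan[: len(ascan) // 2]                 # keep positive depths only

# Synthetic check: a single reflector produces a cosine fringe whose
# frequency encodes its depth (here, bin 64 of 1024 k-samples).
n = 1024
k = np.arange(n)
depth_bin = 64
background = np.ones(n)
spectrum = background + np.cos(2 * np.pi * depth_bin * k / n)
ascan = process_ascan(spectrum, background, np.zeros(n))
print(int(np.argmax(ascan)))  # → 64, the reflector's depth bin
```

On the GPU platform studied in the paper, this arithmetic is cheap enough that the host-to-device and device-to-host copies of the raw spectra and processed A-scans, rather than the FFT itself, set the achievable line rate.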

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors

Contact us: