Speeding up LIP-Canny with CUDA programming
Department of Computer Architecture, Electronics and Electronic Technology, University of Cordoba
XXIII Jornadas de paralelismo, 2012
@inproceedings{palomar2012speeding,
  title={Speeding up LIP-Canny with CUDA programming},
  author={Palomar, R. and Palomares, J.M. and Olivares, J. and Castillo, J.M. and G{\'o}mez-Luna, J.},
  booktitle={XXIII Jornadas de Paralelismo},
  year={2012}
}
The LIP-Canny algorithm outperforms traditional Canny edge detection under varying illumination conditions. The method is based on a robust mathematical model, the LIP paradigm, which is closer to the human visual system. However, this model requires more computations and more complex operations than the traditional paradigm. Non-parallel implementations of LIP-Canny therefore do not meet real-time requirements because of the large number of operations involved. NVIDIA CUDA is a platform that enables the parallelization of this algorithm, achieving very high performance. In this work, we present comparison results between the non-parallel implementation (written in C/C++) and the NVIDIA CUDA one.
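The per-pixel nature of LIP arithmetic is what makes this kind of algorithm map well onto CUDA. As a purely illustrative sketch, not taken from the paper, the kernel below assumes the standard LIP isomorphism phi(f) = -M * ln(1 - f/M) and hypothetical names (lipIsomorphism, M = 256), showing how one such per-pixel transform could be expressed as a CUDA kernel where each thread processes one pixel.

// Illustrative sketch only: applies the LIP isomorphism
// phi(f) = -M * ln(1 - f/M) to every pixel of a grey-tone image.
// Names, sizes and the value of M are assumptions, not the paper's code.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void lipIsomorphism(const float* in, float* out, int n, float M)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Map the grey tone f in [0, M) into LIP space.
        out[i] = -M * logf(1.0f - in[i] / M);
    }
}

int main()
{
    const int n = 512 * 512;   // hypothetical image size
    const float M = 256.0f;    // upper bound of the grey-tone range
    float *dIn, *dOut;
    cudaMalloc(&dIn, n * sizeof(float));
    cudaMalloc(&dOut, n * sizeof(float));
    cudaMemset(dIn, 0, n * sizeof(float)); // placeholder for real image data

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    lipIsomorphism<<<blocks, threads>>>(dIn, dOut, n, M);
    cudaDeviceSynchronize();

    cudaFree(dIn);
    cudaFree(dOut);
    return 0;
}

In this style of implementation, the remaining Canny stages (smoothing, gradient computation, non-maximum suppression, hysteresis) would operate on the transformed image with similarly data-parallel kernels, which is what allows the CUDA version to outperform a sequential C/C++ implementation.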
July 28, 2012 by hgpu