Parallel implementation of a spatio-temporal visual saliency model
GIPSA-lab, Grenoble, France
Journal of Real-Time Image Processing (27 June 2010)
@article{rahmanparallel,
title={Parallel implementation of a spatio-temporal visual saliency model},
author={Rahman, A. and Houzet, D. and Pellerin, D. and Marat, S. and Guyader, N.},
journal={Journal of Real-Time Image Processing},
pages={1--12},
issn={1861-8200},
year={2010},
publisher={Springer}
}
Human vision has been studied in depth over the past years, and several models have been proposed to simulate it on a computer. Some of these models concern visual saliency, which is potentially very useful in many applications such as robotics, image analysis, compression, and video indexing. Unfortunately, these models are compute intensive and have tight real-time requirements. Among the existing models, we have chosen a spatio-temporal one combining static and dynamic information. In this paper we propose a very efficient multi-GPU implementation of this model that reaches real-time performance. We present the algorithms of the model as well as several parallel optimizations on GPU, together with precision and execution-time results. The real-time execution of this multi-path model on multi-GPU makes it a powerful tool to facilitate many vision-related applications.
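The abstract describes a two-pathway model: a static (spatial) pathway and a dynamic (temporal/motion) pathway whose maps are fused into a single saliency map. The full model and its GPU kernels are not given here, so the following is only a minimal CPU sketch of that structure, using a gradient-magnitude proxy for the static pathway, a frame-difference proxy for the dynamic pathway, and a hypothetical weighted fusion (`alpha` is an assumed parameter, not from the paper):

```python
import numpy as np

def static_saliency(frame):
    # Crude spatial-contrast proxy: gradient magnitude of the luminance image.
    gy, gx = np.gradient(frame.astype(np.float64))
    return np.hypot(gx, gy)

def dynamic_saliency(prev_frame, curr_frame):
    # Crude motion proxy: absolute temporal difference between frames.
    return np.abs(curr_frame.astype(np.float64) - prev_frame.astype(np.float64))

def fuse_maps(static_map, dynamic_map, alpha=0.5):
    # Normalize each map to [0, 1], then blend with a hypothetical weight alpha.
    def norm(m):
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return alpha * norm(static_map) + (1.0 - alpha) * norm(dynamic_map)

if __name__ == "__main__":
    prev_frame = np.zeros((8, 8))
    curr_frame = np.zeros((8, 8))
    curr_frame[2:5, 2:5] = 255.0  # a bright moving patch
    s = static_saliency(curr_frame)
    d = dynamic_saliency(prev_frame, curr_frame)
    saliency = fuse_maps(s, d)
    print(saliency.shape)
```

On a GPU, each of these stages maps naturally onto per-pixel kernels, which is what makes the multi-GPU real-time implementation described in the paper feasible.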