GPU implementation of motion estimation for visual saliency
Gipsa-lab, 961 rue de la Houille Blanche, BP 46, 38402 Grenoble Cedex, France
Conference on Design and Architectures for Signal and Image Processing (DASIP), 2010
@inproceedings{rahman2010gpu,
  title={GPU implementation of motion estimation for visual saliency},
  author={Rahman, A. and Houzet, D. and Pellerin, D. and Agud, L.},
  booktitle={Conference on Design and Architectures for Signal and Image Processing (DASIP)},
  year={2010}
}
Visual attention is a complex concept covering the many processes that locate the regions of interest in a visual scene. In this paper, we discuss a spatio-temporal visual saliency model in which the visual information contained in videos is divided into two types, static and dynamic, processed by two separate pathways. These pathways produce intermediate saliency maps that are merged to identify salient regions distinct from their surroundings. Making the model more robust inevitably requires the inclusion of more complex processes. In particular, the dynamic pathway involves compute-intensive motion estimation which, when implemented on GPU, achieved a speedup of up to 40x over its sequential counterpart. The implementation relies on a number of code and memory optimizations to obtain these gains, giving the visual saliency model real-time video analysis capability.
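The abstract does not specify which motion-estimation algorithm the dynamic pathway uses, so the following is only an illustrative sketch of one common baseline: full-search block matching with a sum-of-absolute-differences (SAD) cost. It is a sequential reference in plain Python, not the authors' GPU implementation; all function names and parameters are hypothetical.

```python
# Hedged sketch: full-search block matching with SAD, a common
# motion-estimation baseline. This is NOT the paper's algorithm,
# just a minimal sequential reference for the concept.

def sad(prev, curr, y, x, dy, dx, block):
    """Sum of absolute differences between the block of `curr` at (y, x)
    and the candidate block of `prev` displaced by (dy, dx)."""
    total = 0
    for i in range(block):
        for j in range(block):
            total += abs(curr[y + i][x + j] - prev[y + dy + i][x + dx + j])
    return total

def estimate_motion(prev, curr, block=4, search=2):
    """Return one (dy, dx) vector per block: the displacement within a
    +/-`search` window that minimises the SAD cost."""
    h, w = len(curr), len(curr[0])
    vectors = []
    for y in range(0, h - block + 1, block):
        row = []
        for x in range(0, w - block + 1, block):
            best_cost, best_v = float("inf"), (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    # Skip candidates that fall outside the previous frame.
                    if not (0 <= y + dy and y + dy + block <= h and
                            0 <= x + dx and x + dx + block <= w):
                        continue
                    cost = sad(prev, curr, y, x, dy, dx, block)
                    if cost < best_cost:
                        best_cost, best_v = cost, (dy, dx)
            row.append(best_v)
        vectors.append(row)
    return vectors

if __name__ == "__main__":
    # A bright 4x4 patch shifted right by one pixel between frames.
    prev = [[0] * 8 for _ in range(8)]
    curr = [[0] * 8 for _ in range(8)]
    for i in range(2, 6):
        for j in range(1, 5):
            prev[i][j] = 255
        for j in range(2, 6):
            curr[i][j] = 255
    print(estimate_motion(prev, curr))
```

The per-block searches are fully independent, which is what makes this family of algorithms attractive for GPU parallelisation: each candidate evaluation (or each block) can map to a thread, and the reported optimisations of memory access patterns are typically what turns this into the kind of speedup the abstract describes.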
April 29, 2011 by hgpu