GPU implementation of motion estimation for visual saliency

Anis Rahman, Dominique Houzet, Denis Pellerin, Lionel Agud
Gipsa-lab, 961 rue de la Houille Blanche, BP 46, 38402 Grenoble Cedex, France
Conference on Design and Architectures for Signal and Image Processing (DASIP), 2010


Visual attention is a complex concept encompassing the many processes involved in locating the region of interest in a visual scene. In this paper, we discuss a spatio-temporal visual saliency model in which the visual information contained in videos is divided into two types, static and dynamic, processed by two separate pathways. These pathways produce intermediate saliency maps that are merged to obtain salient regions distinct from their surroundings. Realizing a more robust model would naturally require the inclusion of even more complex processes. In particular, the dynamic pathway involves compute-intensive motion estimation, which, when implemented on GPU, yielded a speedup of up to 40x over its sequential counterpart. The implementation relies on a number of code and memory optimizations to achieve these performance gains, giving the visual saliency model real-time video analysis capability.
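The abstract describes merging the intermediate maps from the static and dynamic pathways into a single saliency map. The paper's exact fusion rule is not given here, so the sketch below is a minimal, hypothetical illustration of the general idea: normalize each pathway's output and combine them with a weighted sum (the `alpha` weight and both toy input maps are assumptions, not from the paper).

```python
import numpy as np

def normalize(m):
    """Rescale a map to [0, 1]; a common preliminary step before fusion."""
    m = m - m.min()
    rng = m.max()
    return m / rng if rng > 0 else m

def fuse_saliency(static_map, dynamic_map, alpha=0.5):
    """Hypothetical fusion of the two pathway outputs:
    a weighted sum of the normalized intermediate saliency maps."""
    s = normalize(static_map)
    d = normalize(dynamic_map)
    return alpha * s + (1.0 - alpha) * d

# Toy example: a 4x4 frame with one statically salient pixel
# and one dynamically salient (moving) pixel.
static_map = np.zeros((4, 4)); static_map[0, 0] = 1.0
dynamic_map = np.zeros((4, 4)); dynamic_map[2, 2] = 2.0
saliency = fuse_saliency(static_map, dynamic_map)
print(saliency.shape)  # (4, 4)
```

In this sketch both salient locations survive fusion with equal weight 0.5; a real model might instead modulate the weights by the global motion content of the scene.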