Action Spotting and Recognition Based on a Spatiotemporal Orientation Analysis

Konstantinos G. Derpanis, Mikhail Sizintsev, Kevin J. Cannons, Richard P. Wildes
Department of Computer Science and Engineering, York University, CSB 1003, 4700 Keele St., Toronto, Ontario M3J 1P3, Canada
IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 35, 2013

@article{derpanis2013action,
  title={Action Spotting and Recognition Based on a Spatiotemporal Orientation Analysis},
  author={Derpanis, Konstantinos G. and Sizintsev, Mikhail and Cannons, Kevin J. and Wildes, Richard P.},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  volume={35},
  year={2013},
  publisher={IEEE}
}

This paper provides a unified framework for the interrelated topics of action spotting, the spatiotemporal detection and localization of human actions in video, and action recognition, the classification of a given video into one of several predefined categories. A novel compact local descriptor of video dynamics in the context of action spotting and recognition is introduced based on visual spacetime oriented energy measurements. This descriptor is efficiently computed directly from raw image intensity data and thereby forgoes the problems typically associated with flow-based features. Importantly, the descriptor allows for the comparison of the underlying dynamics of two spacetime video segments irrespective of spatial appearance, such as differences induced by clothing, and with robustness to clutter. An associated similarity measure is introduced that admits efficient exhaustive search for an action template, derived from a single exemplar video, across candidate video sequences. The general approach presented for action spotting and recognition is amenable to efficient implementation, which is deemed critical for many important applications. For action spotting, details of a real-time GPU-based instantiation of the proposed approach are provided. Empirical evaluation of both action spotting and action recognition on challenging data sets suggests the efficacy of the proposed approach, with state-of-the-art performance documented on standard data sets.
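The descriptor summarized above can be illustrated with a simplified sketch: treat a grayscale video as a 3D spacetime volume, measure oriented energies with Gaussian-derivative filters, rectify and locally aggregate them, normalize pointwise into a distribution, and compare two such distributions with a Bhattacharyya coefficient. Note this is an illustrative approximation, not the authors' implementation; in particular, the axis-aligned and mixed derivative orders stand in for the paper's steerable broadband 3D filters, and the window size and smoothing scale are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def oriented_energies(video, sigma=1.5, win=5):
    """Per-pixel distribution of spacetime oriented energies.

    video: (T, H, W) float array. Returns (K, T, H, W), where each of the
    K channels is a locally aggregated, L1-normalized derivative energy.
    """
    # Axis-aligned and mixed Gaussian-derivative orders along (t, y, x);
    # a simplified stand-in for the paper's steerable 3D filter bank.
    orders = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
              (1, 1, 0), (1, 0, 1), (0, 1, 1)]
    energies = []
    for order in orders:
        resp = gaussian_filter(video, sigma=sigma, order=order)
        # Rectify (square) and aggregate over a local spacetime window.
        energies.append(uniform_filter(resp ** 2, size=win))
    e = np.stack(energies)  # (K, T, H, W)
    # Pointwise L1 normalization yields a distribution over orientations,
    # discounting spatial appearance in favor of the underlying dynamics.
    return e / (e.sum(axis=0, keepdims=True) + 1e-9)

def bhattacharyya(p, q):
    """Pointwise Bhattacharyya coefficient between two oriented-energy
    distributions of matching shape; values near 1 indicate matching
    dynamics."""
    return np.sqrt(p * q).sum(axis=0)
```

For spotting, one would slide a template's energy distribution across a candidate video and sum the pointwise Bhattacharyya coefficients over the template support at each position, taking peaks in the resulting match surface as detections; the normalization step is what makes the comparison insensitive to appearance differences such as clothing.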