
GPU-FV: Realtime Fisher Vector and Its Applications in Video Monitoring

Wenying Ma, Liangliang Cao, Lei Yu, Guoping Long, Yucheng Li
State Key Laboratory of Computer Science, Laboratory of Parallel Software and Computational Science, Institute of Software, Chinese Academy of Sciences
arXiv:1604.03498 [cs.CV], (12 Apr 2016)

@article{ma2016gpufv,
   title={GPU-FV: Realtime Fisher Vector and Its Applications in Video Monitoring},
   author={Ma, Wenying and Cao, Liangliang and Yu, Lei and Long, Guoping and Li, Yucheng},
   year={2016},
   month={apr},
   eprint={1604.03498},
   archivePrefix={arXiv},
   primaryClass={cs.CV}
}

The Fisher vector has been widely used in many multimedia retrieval and visual recognition applications with good performance. However, its computational complexity prevents its use in real-time video monitoring. In this work, we propose and implement GPU-FV, a fast Fisher vector extraction method that leverages modern GPUs. The challenge of implementing the Fisher vector on GPUs lies in the data dependencies in feature extraction and the expensive memory accesses in Fisher vector computation. To handle these challenges, we carefully designed GPU-FV to utilize the computing power of the GPU as fully as possible, and applied optimizations such as loop tiling to boost performance. GPU-FV is about 12 times faster than the CPU version, and 50% faster than a non-optimized GPU implementation. For standard video input (320×240), GPU-FV can process each frame within 34 ms on a modern GPU. Our experiments show that GPU-FV obtains recognition accuracy similar to that of traditional FV on the VOC 2007 and Caltech 256 image sets. We also applied GPU-FV to real-time video monitoring tasks and found that it outperforms a number of previous approaches. In particular, when the number of training examples is small, GPU-FV outperforms the recently popular deep CNN features borrowed from ImageNet. The code can be downloaded from the following link.
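The loop-tiling optimization mentioned in the abstract can be made concrete with a small sketch. The CUDA kernel below is a minimal illustration, not the authors' actual code: it accumulates only the first-order Fisher vector statistics of a GMM, staging a tile of descriptors in shared memory so that one set of coalesced global loads is reused across all dimensions of a component. The kernel name fv_first_order, the tile size TILE, and the row-major layouts of the descriptors, posteriors, and GMM parameters are all illustrative assumptions; second-order statistics and the power/L2 normalization used in practice are omitted.

// A minimal sketch (not the paper's kernel) of loop tiling for
// first-order Fisher vector statistics on a GPU.
// Assumed (hypothetical) layout: descriptors x are row-major N x D,
// posteriors gamma are N x K, GMM means mu and std devs sigma are K x D.
// Grid: K blocks (one per GMM component); block: D threads (one per dimension).

#include <cuda_runtime.h>

#define TILE 64   // descriptors staged in shared memory per iteration

__global__ void fv_first_order(const float *x,      // N x D descriptors
                               const float *gamma,  // N x K posteriors
                               const float *mu,     // K x D component means
                               const float *sigma,  // K x D component std devs
                               float *g_mu,         // K x D output statistics
                               int N, int D, int K)
{
    extern __shared__ float tile[];   // TILE x D descriptor tile
    int k = blockIdx.x;               // GMM component handled by this block
    int d = threadIdx.x;              // descriptor dimension of this thread

    float mu_kd = mu[k * D + d];
    float inv_sigma_kd = 1.0f / sigma[k * D + d];
    float acc = 0.0f;

    for (int base = 0; base < N; base += TILE) {
        int rows = min(TILE, N - base);
        // Cooperatively stage a tile of descriptors in shared memory.
        // Consecutive threads load consecutive floats: coalesced global reads.
        for (int idx = threadIdx.x; idx < rows * D; idx += blockDim.x)
            tile[idx] = x[base * D + idx];
        __syncthreads();

        // Reuse the staged tile: every thread walks all rows, so each
        // descriptor is read from fast shared memory instead of DRAM.
        for (int r = 0; r < rows; ++r)
            acc += gamma[(base + r) * K + k]
                 * (tile[r * D + d] - mu_kd) * inv_sigma_kd;
        __syncthreads();
    }
    g_mu[k * D + d] = acc;   // normalization omitted for brevity
}

A matching launch would be fv_first_order<<<K, D, TILE * D * sizeof(float)>>>(x, gamma, mu, sigma, g_mu, N, D, K), which assumes the dimension D fits in one thread block (D <= 1024) and the tile fits in shared memory (TILE * D * 4 bytes). The design choice is the standard tiling trade-off: pay for one coalesced pass over global memory per tile, then amortize it over many shared-memory reads within the block.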
