Accelerating adaptive background subtraction with GPU and CBEA architecture
Pennsylvania State Univ., University Park, PA, USA
IEEE Workshop on Signal Processing Systems (SIPS), 2010
@inproceedings{poremba2010accelerating,
title={Accelerating adaptive background subtraction with GPU and CBEA architecture},
author={Poremba, M. and Xie, Y. and Wolf, M.},
booktitle={Signal Processing Systems (SIPS), 2010 IEEE Workshop on},
pages={305--310},
organization={IEEE},
year={2010}
}
Background subtraction is a fundamental task in computer vision and underlies many applications. In the past, background subtraction was limited by the available computing power: to achieve real-time performance, the task was performed on small frames and, in the case of adaptive algorithms, with relatively small models. With the introduction of multi- and many-core chip-multiprocessors (CMPs), more computing resources are available for this important task. The advent of specialized CMPs, such as NVIDIA’s Compute Unified Device Architecture (CUDA) and IBM’s Cell Broadband Engine Architecture (CBEA), provides new opportunities to accelerate real-time video applications. In this paper, we evaluate the acceleration of background subtraction on these two CMP architectures (CUDA and CBEA), so that larger image frames can be processed with more models while still achieving real-time performance. Our results show significant performance improvements over a baseline implementation on a multi-threaded dual-core CPU: the CUDA and CBEA implementations achieve speedups of up to 17.82X and 2.77X, respectively.
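To illustrate the kind of per-pixel computation being accelerated, here is a minimal sketch of adaptive background subtraction using a simple running-average background model. This is a common textbook formulation, not the paper's exact algorithm; the function names, `alpha` learning rate, and `threshold` are illustrative assumptions. Because each pixel is updated independently, the loop maps naturally onto data-parallel architectures such as CUDA and CBEA.

```python
# Minimal sketch of adaptive background subtraction (running-average model).
# NOTE: illustrative only -- not the algorithm evaluated in the paper.
# Each pixel's background estimate B adapts toward the incoming frame:
#   B <- B + alpha * (I - B)
# A pixel is flagged as foreground when |I - B| exceeds a threshold.

def update_background(background, frame, alpha=0.05):
    """Blend the new frame into the background model (per pixel)."""
    return [b + alpha * (i - b) for b, i in zip(background, frame)]

def foreground_mask(background, frame, threshold=25):
    """Mark pixels whose deviation from the model exceeds the threshold."""
    return [abs(i - b) > threshold for b, i in zip(background, frame)]

# Example: a 4-pixel "frame" in which one pixel suddenly brightens,
# as a moving object would cause.
bg = [10.0, 10.0, 10.0, 10.0]
frame = [10.0, 12.0, 9.0, 200.0]
mask = foreground_mask(bg, frame)
bg = update_background(bg, frame)
print(mask)  # → [False, False, False, True]
```

Since every pixel's update and threshold test depend only on that pixel's own state, a GPU implementation can assign one thread per pixel, which is what makes this class of algorithm a good fit for the CMP architectures discussed above.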
May 26, 2011 by hgpu