A comprehensive analysis and parallelization of an image retrieval algorithm
The State Key Lab of ASIC & System, Fudan University
IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), 2011
@inproceedings{fang2011comprehensive,
  title        = {A comprehensive analysis and parallelization of an image retrieval algorithm},
  author       = {Fang, Z. and Yang, D. and Zhang, W. and Chen, H. and Zang, B.},
  booktitle    = {Performance Analysis of Systems and Software (ISPASS), 2011 IEEE International Symposium on},
  pages        = {154--164},
  organization = {IEEE},
  year         = {2011}
}
The prevalence of the Internet and cloud computing has made multimedia data, such as images and videos, a major data type in our daily lives. For example, many data-intensive applications, such as health care and video recommendation, involve collecting, indexing and retrieving tera-scale multimedia data every day. With such huge amounts of multimedia data to process, processing speed has become one of the major challenges in meeting real-time requirements. The advent of multi-core hardware has opened new opportunities to improve the effectiveness of multimedia data processing. In this paper, we present a comprehensive analysis of different forms of potential parallelism, including pipeline parallelism, task parallelism at both the scale level and the block level, data parallelism, and their combinations, in a typical image retrieval algorithm called SURF, which is the core algorithm of many multimedia (i.e., image and video) retrieval applications. Experimental results lead to the following observations about parallelism in SURF: 1) when only one level of parallelism is exploited, block-level parallelism is more efficient and scalable than the other alternatives; 2) data parallelism cannot be ignored, especially as parallel resources increase; and 3) the combination of block-level parallelism and pipeline parallelism is the most efficient parallelization strategy for the studied image retrieval algorithm. Based on these observations, we have implemented a parallel image retrieval algorithm that can be easily mapped onto different multi-core platforms with good scalability. On a 16-core commodity server, the parallel implementation achieves a speedup of 13X, which is 84% faster than P-SURF, a previous state-of-the-art parallelization of SURF on CPUs; on a GPGPU, it achieves a speedup of 46X, which is 53% faster than CUDA SURF, a previous state-of-the-art parallelization of SURF on GPGPUs.
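To make the abstract's notion of block-level parallelism concrete, the sketch below shows one plausible way to split a SURF-style detector over image tiles with OpenMP. This is not the paper's implementation; the types `IntegralImage` and `Point` and the per-tile routine `detect_in_block` are illustrative placeholders, and a real detector would evaluate the approximated Hessian determinant at several scales inside each tile.

```cpp
// A minimal, hypothetical sketch of block-level parallelism: the image is
// divided into tiles and each tile is processed as an independent task.
#include <cstddef>
#include <vector>
#include <omp.h>

struct Point { int x, y; float response; };

struct IntegralImage {
    int width = 0, height = 0;
    std::vector<float> sums;   // summed-area table, row-major
};

// Placeholder per-block detector: stands in for the per-tile interest-point
// scan that a real SURF detector would perform.
static std::vector<Point> detect_in_block(const IntegralImage& img,
                                          int x0, int y0, int w, int h) {
    (void)img; (void)x0; (void)y0; (void)w; (void)h;
    return {};
}

std::vector<Point> detect_block_parallel(const IntegralImage& img,
                                         int block = 64) {
    int bx = (img.width  + block - 1) / block;
    int by = (img.height + block - 1) / block;
    std::vector<std::vector<Point>> partial(static_cast<size_t>(bx) * by);

    // Block-level parallelism: every tile is an independent unit of work,
    // which balances load even when interest points cluster in the image.
    #pragma omp parallel for collapse(2) schedule(dynamic)
    for (int j = 0; j < by; ++j)
        for (int i = 0; i < bx; ++i)
            partial[static_cast<size_t>(j) * bx + i] =
                detect_in_block(img, i * block, j * block, block, block);

    std::vector<Point> points;   // sequential merge of per-tile results
    for (const auto& v : partial)
        points.insert(points.end(), v.begin(), v.end());
    return points;
}
```

In a combined scheme like the one the abstract favors, each pipeline stage (integral image construction, detection, description) could internally use this kind of tile-level task decomposition while successive images flow through the stages.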
May 21, 2011 by hgpu