A Real-time GPU Implementation of the SIFT Algorithm for Large-Scale Video Analysis Tasks

Hannes Fassold, Jakub Rosner
Joanneum Research, 2015

@article{fassold2015real,
   author={Fassold, Hannes and Rosner, Jakub},
   title={A Real-time GPU Implementation of the SIFT Algorithm for Large-Scale Video Analysis Tasks},
   year={2015}
}

The SIFT algorithm is one of the most popular feature extraction methods and is therefore widely used in all sorts of video analysis tasks such as instance search and duplicate/near-duplicate detection. We present an efficient GPU implementation of the SIFT descriptor extraction algorithm using CUDA. The major steps of the algorithm are presented, and for each step we describe how to parallelize it massively and efficiently, how to exploit the unique capabilities of the GPU such as shared memory and texture memory, and how to avoid or minimize common GPU performance pitfalls. We compare the GPU implementation with the reference CPU implementation in terms of runtime and quality, achieving a speedup factor of approximately 3 - 5 for SD and 5 - 6 for Full HD video with respect to a multi-threaded CPU implementation, which allows us to run the SIFT descriptor extraction algorithm in real-time on SD video. Furthermore, quality tests show that the GPU implementation delivers the same quality as the reference CPU implementation from the HessSIFT library. We further describe the benefits of GPU-accelerated SIFT descriptor calculation for video analysis applications such as near-duplicate video detection.
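The paper itself does not publish source code, but the kind of GPU optimization it refers to (massive per-pixel parallelism, shared memory for data reuse, constant memory for small read-only data) can be illustrated with a minimal CUDA sketch of one building block of the SIFT scale-space pyramid: the horizontal pass of a separable Gaussian blur. The kernel name, tile size and launch configuration below are assumptions for illustration, not taken from the paper.

```cuda
// Illustrative sketch only: horizontal pass of a separable Gaussian blur,
// one stage of building the SIFT scale-space pyramid on the GPU.
// TILE_W, MAX_RADIUS and all identifiers are assumed, not from the paper.
#include <cuda_runtime.h>

#define TILE_W 128      // threads per block (assumed tile width)
#define MAX_RADIUS 16   // maximum supported filter radius (assumption)

// Gaussian filter taps kept in constant memory (cached, read-only broadcast).
__constant__ float d_gaussKernel[2 * MAX_RADIUS + 1];

__global__ void gaussBlurRowKernel(const float* __restrict__ src, float* dst,
                                   int width, int height, int radius)
{
    // Shared memory tile: the row segment plus a halo of 'radius' pixels on
    // each side, so each global-memory pixel is read only once per block.
    __shared__ float tile[TILE_W + 2 * MAX_RADIUS];

    const int y = blockIdx.y;
    const int x = blockIdx.x * TILE_W + threadIdx.x;

    // Load the central pixel (clamped at the image border).
    int gx = min(max(x, 0), width - 1);
    tile[threadIdx.x + radius] = src[y * width + gx];

    // The first 'radius' threads also load the left and right halo pixels.
    if (threadIdx.x < radius) {
        int lx = min(max(x - radius, 0), width - 1);
        int rx = min(x + TILE_W, width - 1);
        tile[threadIdx.x] = src[y * width + lx];
        tile[threadIdx.x + radius + TILE_W] = src[y * width + rx];
    }
    __syncthreads();

    if (x >= width) return;

    // Convolve the cached row segment with the Gaussian taps.
    float sum = 0.0f;
    for (int k = -radius; k <= radius; ++k)
        sum += tile[threadIdx.x + radius + k] * d_gaussKernel[k + radius];
    dst[y * width + x] = sum;
}
```

A typical launch for this sketch would use `dim3 block(TILE_W)` and `dim3 grid((width + TILE_W - 1) / TILE_W, height)`, after copying the filter taps to `d_gaussKernel` with `cudaMemcpyToSymbol`. A vertical pass of the same form completes the separable blur; the actual implementation described in the paper may organize these steps differently.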