Fast and Efficient FPGA-Based Feature Detection Employing the SURF Algorithm

Dimitris Bouris, Antonis Nikitakis, Ioannis Papaefstathiou
Dept. of Electron. & Comput. Eng., Tech. Univ. of Crete, Chania, Greece
18th IEEE Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), 2010


@inproceedings{bouris2010surf,
   title={Fast and Efficient FPGA-based Feature Detection employing the SURF algorithm},
   author={Bouris, D. and Nikitakis, A. and Papaefstathiou, I.},
   booktitle={2010 18th IEEE Annual International Symposium on Field-Programmable Custom Computing Machines},
   year={2010}
}




Feature detectors are schemes that locate and describe points or regions of 'interest' in an image. Today, numerous machine vision applications need efficient feature detectors that can operate in real time; moreover, since feature detection is one of the most time-consuming tasks in many vision devices, the speed of the detection scheme severely affects the effectiveness of the complete system. As a result, feature detectors are increasingly being implemented on state-of-the-art FPGAs. This paper describes an FPGA-based implementation of the SURF (Speeded-Up Robust Features) detector introduced by Bay, Ess, Tuytelaars and Van Gool; this algorithm is considered to be the most efficient feature detection algorithm available. Moreover, this is, to the best of our knowledge, the first implementation of this scheme on an FPGA. Our system can process standard video (640 x 480 pixels) at up to 56 frames per second and outperforms a state-of-the-art dual-core Intel CPU by at least 8 times. In addition, the proposed system, which is clocked at 200 MHz and consumes less than 20 W, sustains a frame rate only 20% lower than the peak rate of a high-end GPU executing the same basic algorithm; that GPU comprises 128 floating-point cores clocked at 1.35 GHz and consumes more than 200 W.
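SURF owes much of its speed to the integral-image trick underlying its Fast-Hessian detector: after one pass over the frame, any axis-aligned box-filter response (the box filters approximate Gaussian second derivatives) costs only four lookups, regardless of filter size. The sketch below illustrates that core idea in plain Python; the function names are illustrative and not taken from the paper's FPGA design.

```python
def integral_image(img):
    """Build an integral image: ii[y][x] = sum of img[0..y-1][0..x-1].

    A zero-padded first row/column avoids boundary special cases.
    """
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            # Sum above this row plus the running sum of this row.
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def box_sum(ii, x0, y0, x1, y1):
    """Sum of img over the inclusive rectangle (x0,y0)-(x1,y1),
    computed with four lookups independent of the box size."""
    return (ii[y1 + 1][x1 + 1] - ii[y0][x1 + 1]
            - ii[y1 + 1][x0] + ii[y0][x0])
```

In SURF proper, combinations of such box sums form the approximated Hessian entries Dxx, Dyy and Dxy at each scale, and interest points are maxima of det(H); the constant-time box sums are what make the multi-scale search cheap enough for hardware pipelines like the one described here.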

