Robust feature matching in 2.3 μs

Simon Taylor, Edward Rosten, Tom Drummond

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

57 Citations (Scopus)


In this paper we present a robust feature matching scheme in which features can be matched in 2.3 μs. For a typical task involving 150 features per image, this results in a processing time of 500 μs for feature extraction and matching. In order to achieve very fast matching we use simple features based on histograms of pixel intensities and an indexing scheme based on their joint distribution. The features are stored with a novel bit mask representation which requires only 44 bytes of memory per feature and allows computation of a dissimilarity score in 20 ns. A training phase gives the patch-based features invariance to small viewpoint variations. Larger viewpoint variations are handled by training entirely independent sets of features from different viewpoints. A complete system is presented in which a database of around 13,000 features is used to robustly localise a single planar target in just over a millisecond, including all steps from feature detection to model fitting. The resulting system shows robustness comparable to SIFT [8] and Ferns [14] while using a tiny fraction of the processing time, and in the latter case a fraction of the memory as well.
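The abstract does not spell out how the 20 ns dissimilarity score is computed; a plausible minimal sketch, assuming (as the abstract suggests) an 8x8 patch whose 64 pixels are each quantized into one of a handful of intensity bins, is to store one 64-bit mask per bin and count mismatching pixels with bitwise AND-NOT plus popcount. The struct layout, bin count, and function names below are illustrative assumptions, not the authors' code:

```c
#include <stdint.h>

#define NUM_BINS 5  /* assumed number of intensity-histogram bins */

/* Hypothetical stored feature: for each intensity bin, a 64-bit mask with a
   1 at every patch pixel whose intensity is allowed to fall in that bin
   (5 * 8 bytes of masks plus a small header is in the region of the 44
   bytes per feature quoted in the abstract). */
typedef struct {
    uint64_t bin_mask[NUM_BINS];
} Feature;

/* Dissimilarity: number of runtime pixels whose observed intensity bin is
   not among the bins the stored feature permits at that pixel position.
   Each bin costs one AND-NOT and one popcount, so the whole score is a
   handful of word-level instructions. */
static int dissimilarity(const Feature *stored,
                         const uint64_t runtime_mask[NUM_BINS]) {
    int errors = 0;
    for (int b = 0; b < NUM_BINS; ++b)
        errors += __builtin_popcountll(runtime_mask[b] & ~stored->bin_mask[b]);
    return errors;
}
```

A runtime patch is quantized once into its own per-bin masks, after which each candidate feature in the database costs only the loop above, which is consistent with a per-comparison cost measured in nanoseconds.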

Original language: English
Title of host publication: 2009 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2009
Number of pages: 8
Publication status: Published - 2009
Externally published: Yes
Event: IEEE Conference on Computer Vision and Pattern Recognition 2009 - Miami, United States of America
Duration: 20 Jun 2009 – 25 Jun 2009


Conference: IEEE Conference on Computer Vision and Pattern Recognition 2009
Abbreviated title: CVPR 2009
Country/Territory: United States of America
