Visual localization under appearance change: filtering approaches

Anh-Dzung Doan, Yasir Latif, Tat-Jun Chin, Yu Liu, Shin-Fang Ch’ng, Thanh-Toan Do, Ian Reid

Research output: Contribution to journal › Article › Research › peer-review

Abstract

A major focus of current research on place recognition is visual localization for autonomous driving. In this scenario, since cameras operate continuously, it is realistic to expect videos as the input to visual localization algorithms, as opposed to the single-image queries used in other visual localization works. In this paper, we show that exploiting temporal continuity in the testing sequence significantly improves visual localization, both qualitatively and quantitatively. Although intuitive, this idea has not been fully explored in recent works. To this end, we propose two filtering approaches that exploit the temporal smoothness of image sequences: (i) filtering in the discrete domain with a hidden Markov model, and (ii) filtering in the continuous domain with Monte Carlo-based visual localization. Our approaches rely on local features combined with an encoding technique that represents each image as a single vector. Experimental results on synthetic and real datasets show that our proposed methods achieve better results than the state of the art (i.e., deep learning-based pose regression approaches) on the task of visual localization under significant appearance change. Our synthetic dataset and source code are made publicly available (https://sites.google.com/view/g2d-software/home; https://github.com/dadung/Visual-Localization-Filtering).
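The discrete-domain approach described in the abstract can be illustrated with a standard HMM forward-filtering step over place hypotheses. This is a minimal sketch, not the authors' implementation: the function names, the band-structured transition matrix, and the cosine-similarity observation model are illustrative assumptions standing in for the paper's local-feature encoding.

```python
import numpy as np

def hmm_filter_step(belief, transition, obs_likelihood):
    """One forward-filtering step of a discrete HMM over N map places.

    belief         : (N,) prior probability over places
    transition     : (N, N) row-stochastic matrix; transition[i, j] is the
                     probability of moving from place i to place j (temporal
                     smoothness: mass concentrated near the current place)
    obs_likelihood : (N,) likelihood of the current query image at each place
    """
    predicted = belief @ transition          # prediction (motion) step
    posterior = predicted * obs_likelihood   # measurement update
    return posterior / posterior.sum()       # renormalize to a distribution

def cosine_likelihood(query_vec, place_vecs, temperature=0.1):
    """Hypothetical observation model: turn cosine similarities between an
    encoded query image and stored place descriptors into likelihoods."""
    q = query_vec / np.linalg.norm(query_vec)
    P = place_vecs / np.linalg.norm(place_vecs, axis=1, keepdims=True)
    return np.exp((P @ q) / temperature)

# Usage over a video: start from a uniform belief, fold in one frame at a time.
place_vecs = np.eye(3)                       # toy database of 3 place descriptors
belief = np.full(3, 1.0 / 3.0)
transition = np.array([[0.6, 0.4, 0.0],      # mostly stay or advance one place
                       [0.0, 0.6, 0.4],
                       [0.4, 0.0, 0.6]])
obs = cosine_likelihood(np.array([0.1, 0.9, 0.1]), place_vecs)
belief = hmm_filter_step(belief, transition, obs)
```

The continuous-domain counterpart replaces the discrete belief with a set of weighted pose particles; the same predict-then-reweight structure applies, with the transition matrix swapped for a motion model and the per-place likelihood evaluated per particle.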

Original language: English
Number of pages: 14
Journal: Neural Computing and Applications
DOIs
Publication status: Accepted/In press - 17 Sep 2020
Externally published: Yes

Keywords

  • Autonomous driving
  • Place recognition
  • Robotics
  • Visual localization
