Resolving occlusion in active visual target search of high-dimensional robotic systems

Sina Radmard, David Meger, James J. Little, Elizabeth A. Croft

Research output: Contribution to journal › Article › Research › peer-review


Abstract

We propose an algorithm for handling visual occlusions that disrupt visual tracking in high-dimensional eye-in-hand systems. Our algorithm allows a robot to look behind an occluder during active visual target search and reacquire its target in an online manner. A particle filter continuously estimates the target location, and an enhanced observation model updates the target belief state. Meanwhile, we build a simple but efficient map of the occluder boundaries to compute potential occlusion-clearing motions. Our mixed-initiative cost function balances the goal of gaining more information about the target and the occluder boundary against the cost of sensor motion. A data-driven planner uses informed samples to balance target search against information gain, avoiding exhaustive mapping of the three-dimensional occluder into configuration space. We demonstrate the capabilities of our algorithm in simulation and in a real-world experiment, and show that our proposed solvers outperform a common approach in the literature. Our results indicate that our algorithm can quickly obtain clear views of the target when occlusion is persistent and significant camera motion is required.
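The abstract describes the approach only at a high level. As an illustrative aid, the sketch below shows one way an occlusion-aware particle filter and a mixed-initiative viewpoint score of this general flavor could be structured in Python. It is not the paper's implementation: the function names, noise parameters, the toy planar-occluder check, and the 6-DOF configuration vector are all assumptions made here for illustration; the paper's own observation model, occluder boundary map, and data-driven planner are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)


def predict(particles, motion_noise=0.05):
    """Propagate target-position particles with a simple random-walk motion model."""
    return particles + rng.normal(0.0, motion_noise, particles.shape)


def occlusion_aware_update(particles, weights, detection, occluded, sigma=0.2):
    """Occlusion-aware likelihood (illustrative):
    - if the target was detected, visible particles are scored by a Gaussian
      around the measured 3-D position, occluded ones get weak, flat evidence;
    - on a missed detection, only visible particles are down-weighted, since
      the target should have been seen there; occluded particles are untouched."""
    if detection is not None:
        d = np.linalg.norm(particles - detection, axis=1)
        likelihood = np.exp(-0.5 * (d / sigma) ** 2)
        likelihood[occluded] = np.exp(-0.5 * 9.0)
    else:
        likelihood = np.where(occluded, 1.0, 0.1)
    weights = weights * likelihood + 1e-12
    return weights / weights.sum()


def resample(particles, weights):
    """Multinomial resampling to counter weight degeneracy."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx].copy(), np.full(len(particles), 1.0 / len(particles))


def viewpoint_score(candidate_q, current_q, particles, weights, is_occluded_from, w_action=0.1):
    """Mixed-initiative score (sketch): belief mass made visible from a candidate
    camera configuration, minus a joint-space motion (sensor action) cost."""
    occluded = is_occluded_from(candidate_q, particles)
    info_gain = np.sum(weights[~occluded])
    action_cost = np.linalg.norm(np.asarray(candidate_q) - np.asarray(current_q))
    return info_gain - w_action * action_cost


if __name__ == "__main__":
    # Toy setup: target believed to lie behind a planar occluder at x = 1.0.
    particles = rng.uniform([0.5, -0.5, 0.0], [1.5, 0.5, 1.0], size=(500, 3))
    weights = np.full(len(particles), 1.0 / len(particles))

    def is_occluded_from(q, pts):
        # Hypothetical occluder model: points with x > 1.0 are hidden unless
        # the first joint has moved past 0.5 (the camera has "peeked around").
        return (pts[:, 0] > 1.0) & (q[0] <= 0.5)

    current_q = np.zeros(6)  # 6-DOF eye-in-hand arm configuration (assumed)
    candidates = [rng.uniform(-1, 1, 6) for _ in range(20)]
    best = max(candidates, key=lambda q: viewpoint_score(q, current_q, particles,
                                                         weights, is_occluded_from))

    particles = predict(particles)
    occluded = is_occluded_from(best, particles)
    weights = occlusion_aware_update(particles, weights, detection=None, occluded=occluded)
    particles, weights = resample(particles, weights)
    print("best candidate first joint:", round(best[0], 3))
```

In this toy version the occluder check stands in for the paper's occluder boundary map, and the greedy scoring of sampled configurations stands in for its data-driven planner; the structure is only meant to make the information-gain-versus-action-cost trade-off concrete.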

Original language: English
Pages (from-to): 616-629
Number of pages: 14
Journal: IEEE Transactions on Robotics
Volume: 34
Issue number: 3
DOIs
Publication status: Published - 1 Jun 2018

Keywords

  • Active visual search
  • occlusion resolution
  • online motion planning
