Learning to assist drone landings

Kal Backman, Dana Kulic, Hoam Chung

Research output: Contribution to journal › Article › Research › peer-review

Abstract

Unmanned aerial vehicles (UAVs) are often used for navigating dangerous terrains; however, they are difficult to pilot. Due to complex input-output mappings, limited perception, complex system dynamics and the need to maintain a safe operating distance, novice pilots struggle to perform safe landings in obstacle-filled environments. In this work we propose a shared autonomy approach that assists novice pilots in performing safe landings on one of several elevated platforms at a proficiency equal to or greater than that of experienced pilots. Our approach consists of two modules: a perceptual module and a policy module. The perceptual module compresses high-dimensional RGB-D images into a latent vector trained with a cross-modal variational auto-encoder. The policy module provides assistive control inputs trained with the reinforcement learning algorithm TD3. We conduct a user study (n=33) in which participants land a simulated drone with and without the assistant. Despite the goal platform not being known to the assistant, assisted participants of all skill levels were able to outperform unassisted experienced participants in the task.
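The two-module pipeline described in the abstract can be sketched as a single control step: encode the RGB-D observation into a latent vector, then blend the pilot's raw command with an assistive correction. This is a minimal illustrative sketch only; the dimensions, the random-projection "encoder", and the damping "policy" are stand-ins for the paper's learned cross-modal VAE and TD3 policy, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not taken from the paper.
LATENT_DIM = 8
ACTION_DIM = 4  # e.g. roll, pitch, yaw rate, thrust

def encode_observation(rgbd_image: np.ndarray) -> np.ndarray:
    """Stand-in for the cross-modal VAE encoder: compresses an
    RGB-D image into a low-dimensional latent vector.  A fixed
    random projection plays the role of the learned encoder here."""
    projection = rng.standard_normal((LATENT_DIM, rgbd_image.size))
    return projection @ rgbd_image.ravel() / rgbd_image.size

def assistive_policy(latent: np.ndarray, pilot_action: np.ndarray) -> np.ndarray:
    """Stand-in for the TD3-trained policy: maps the latent state and
    the pilot's raw input to a corrected control command.  This toy
    'policy' merely damps the pilot action toward neutral (hover),
    with a fictitious blend weight derived from the latent norm."""
    alpha = 1.0 / (1.0 + np.linalg.norm(latent))  # blend weight in (0, 1]
    return (1.0 - alpha) * pilot_action

# Usage: one control step of the shared-autonomy loop.
rgbd = rng.random((4, 4, 4))                  # tiny fake RGB-D frame
pilot_cmd = np.array([0.2, -0.1, 0.0, 0.5])   # raw pilot input
latent = encode_observation(rgbd)
assisted_cmd = assistive_policy(latent, pilot_cmd)
```

In the actual system both modules are learned offline; the sketch only shows how they compose at run time, with the assistant modifying rather than replacing the pilot's input.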

Original language: English
Pages (from-to): 3192-3199
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 6
Issue number: 2
DOIs
Publication status: Published - Apr 2021

Keywords

  • Adaptation models
  • Aerial Systems: Perception and Autonomy
  • Aerospace electronics
  • Drones
  • Estimation
  • Human-Robot Collaboration
  • Intention Recognition
  • Task analysis
  • Training
  • Visualization
