Joint estimation of expertise and reward preferences from human demonstrations

Research output: Contribution to journal › Article › Research › peer-review

1 Citation (Scopus)

Abstract

When a robot learns from human examples, most approaches assume that the human partner provides examples of optimal behavior. However, there are applications in which the robot learns from nonexpert humans. We argue that the robot should learn not only about the human's objectives, but also about their expertise level. The robot could then leverage this joint information to reduce or increase the frequency at which it provides assistance to its human partner, or to be more cautious when learning new skills from novice users. Similarly, by taking into account the human's expertise, the robot would also be able to infer a human's true objectives even when the human fails to properly demonstrate those objectives due to a lack of expertise. In this article, we propose to jointly infer the expertise level and the objective function of a human given observations of their (possibly) nonoptimal demonstrations. Two inference approaches are proposed. In the first approach, inference is done over a finite discrete set of possible objective functions and expertise levels. In the second approach, the robot optimizes over the space of all possible hypotheses and finds the objective function and the expertise level that best explain the observed human behavior. We demonstrate our proposed approaches both in simulation and with real user data.
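The first approach described above (inference over a finite discrete set of objective functions and expertise levels) can be sketched as a simple Bayesian update. This is a minimal illustration, not the authors' implementation: it assumes a Boltzmann-rational demonstrator whose rationality coefficient stands in for expertise, and all reward hypotheses, beta values, and demonstrations below are made up for the example.

```python
import numpy as np

# Three candidate reward functions over four discrete actions (hypothetical).
reward_hypotheses = np.array([
    [1.0, 0.0, 0.0, 0.0],   # hypothesis A: action 0 is best
    [0.0, 1.0, 0.0, 0.0],   # hypothesis B: action 1 is best
    [0.0, 0.0, 1.0, 0.0],   # hypothesis C: action 2 is best
])
# Discrete expertise levels, modeled as Boltzmann rationality coefficients.
betas = np.array([0.5, 2.0, 10.0])  # low, medium, high expertise

def action_likelihood(rewards, beta):
    """Boltzmann-rational action distribution: P(a) proportional to exp(beta * R(a))."""
    logits = beta * rewards
    logits -= logits.max()          # numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Observed demonstrations: the human mostly picks action 0, with one slip.
demos = [0, 0, 1, 0]

# Joint posterior over the grid of (reward hypothesis, expertise level),
# starting from a uniform prior and multiplying in each demo's likelihood.
posterior = np.ones((len(reward_hypotheses), len(betas)))
for r_idx, rewards in enumerate(reward_hypotheses):
    for b_idx, beta in enumerate(betas):
        p = action_likelihood(rewards, beta)
        for a in demos:
            posterior[r_idx, b_idx] *= p[a]
posterior /= posterior.sum()

r_map, b_map = np.unravel_index(posterior.argmax(), posterior.shape)
print(f"MAP reward hypothesis: {r_map}, MAP rationality beta: {betas[b_map]}")
```

Because one of the four demonstrations is suboptimal under hypothesis A, the joint posterior concentrates on hypothesis A paired with a medium rationality level rather than the highest one, which captures the key idea: the inferred objective is recovered despite the imperfect demonstration, and the slip is attributed to limited expertise instead.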

Original language: English
Pages (from-to): 681-698
Number of pages: 18
Journal: IEEE Transactions on Robotics
Volume: 39
Issue number: 1
DOIs
Publication status: Published - Feb 2023

Keywords

  • Behavioral sciences
  • Expertise inference
  • Hidden Markov models
  • human factors
  • learning and adaptive systems
  • learning from demonstration
  • Linear programming
  • Predictive models
  • Reliability
  • Robots
  • Task analysis
