Human-centered explanation for goal recognition system

Abeer Alshehri, Tim Miller, Mor Vered, Hajar Alamri

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › Research › peer-review

Abstract

This paper contributes to ongoing work in XAI by exploring explainable goal recognition (GR) from a human-centered perspective. We describe a human-centered study that informs a data-driven approach to understanding how people explain goal recognition tasks, with the future aim of building an explanatory model for GR. In our study, participants attempted to infer an agent's goal given some observed behavior, and then provided explanations (why, why not, or both) for those inferred goals. Using a thematic analysis process, we identified 11 codes from within 864 explanations of agents performing optimally, suboptimally, or irrationally in a Sokoban game variant. Interpreting these codes in light of existing theory of behavior explanation, we built a preliminary model for goal recognition explanations.
Original language: English
Title of host publication: IJCAI-PRICAI Workshop on Explainable Artificial Intelligence (XAI), 2020
Editors: Tim Miller, Rosina Weber, David Aha, Daniele Magazzeni, Ofra Amir
Place of publication: Marina del Rey, CA, USA
Publisher: Association for the Advancement of Artificial Intelligence (AAAI)
Number of pages: 7
Publication status: Published - 2021
Event: IJCAI-PRICAI Workshop on Explainable Artificial Intelligence (XAI) 2020 - Online, Japan
Duration: 8 Jan 2021 - 8 Jan 2021
https://sites.google.com/view/xai2020/home (Website and proceedings)

Conference

Conference: IJCAI-PRICAI Workshop on Explainable Artificial Intelligence (XAI) 2020
Abbreviated title: XAI 2020
Country/Territory: Japan
Period: 8/01/21 - 8/01/21
Internet address: https://sites.google.com/view/xai2020/home