From contrastive to abductive explanations and back again

Alexey Ignatiev, Nina Narodytska, Nicholas Asher, Joao Marques-Silva

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

57 Citations (Scopus)

Abstract

Explanations of Machine Learning (ML) models often address a "Why?" question. Such explanations can be related to selecting feature-value pairs which are sufficient for the prediction. Recent work has investigated explanations that address a "Why not?" question, i.e. finding a change of feature values that guarantees a change of prediction. Given their goals, these two forms of explaining predictions of ML models appear to be mostly unrelated. However, this paper demonstrates otherwise, and establishes a rigorous formal relationship between abductive and contrastive explanations. Concretely, the paper proves that, for any given instance, contrastive explanations are minimal hitting sets of abductive explanations and vice-versa. Furthermore, the paper devises novel algorithms for extracting and enumerating both forms of explanations.
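The hitting-set duality stated in the abstract can be illustrated with a small brute-force sketch. The family of abductive explanations below is a hypothetical toy example (sets of feature indices), not data from the paper; `minimal_hitting_sets` is an illustrative helper, not the authors' algorithm:

```python
from itertools import chain, combinations

def minimal_hitting_sets(sets):
    """Enumerate all subset-minimal hitting sets of a family of sets (brute force)."""
    universe = sorted(set(chain.from_iterable(sets)))
    hitting = []
    # Enumerate candidates by increasing size, so any candidate that contains
    # an already-found hitting set is not subset-minimal and can be skipped.
    for r in range(1, len(universe) + 1):
        for cand in combinations(universe, r):
            c = set(cand)
            if all(c & s for s in sets):             # c intersects every set
                if not any(h < c for h in hitting):  # no smaller hitting set inside c
                    hitting.append(c)
    return hitting

# Toy family of abductive explanations (hypothetical feature index sets)
axps = [{0, 1}, {1, 2}]
cxps = minimal_hitting_sets(axps)
print(cxps)  # the contrastive explanations under the duality: [{1}, {0, 2}]
```

Running `minimal_hitting_sets` again on the resulting contrastive explanations recovers the original family `[{0, 1}, {1, 2}]`, illustrating the "and back again" direction of the relationship.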

Original language: English
Title of host publication: XIXth International Conference of the Italian Association for Artificial Intelligence, Virtual Event, November 25–27, 2020, Revised Selected Papers
Editors: Matteo Baldoni, Stefania Bandini
Place of Publication: Cham, Switzerland
Publisher: Springer
Pages: 335-355
Number of pages: 21
ISBN (Electronic): 9783030770914
ISBN (Print): 9783030770907
DOIs
Publication status: Published - 2021
Event: International Conference of the Italian Association for Artificial Intelligence 2020 - Online, Italy
Duration: 24 Nov 2020 – 27 Nov 2020
Conference number: 19th
https://link.springer.com/book/10.1007/978-3-030-77091-4 (Proceedings)

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 12414
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: International Conference of the Italian Association for Artificial Intelligence 2020
Abbreviated title: AIxIA 2020
Country/Territory: Italy
Period: 24/11/20 – 27/11/20