Abstract
Explanations of Machine Learning (ML) models often address a "Why?" question. Such explanations can be related to selecting feature-value pairs which are sufficient for the prediction. Recent work has investigated explanations that address a "Why not?" question, i.e. finding a change of feature values that guarantees a change of prediction. Given their goals, these two forms of explaining predictions of ML models appear to be mostly unrelated. However, this paper demonstrates otherwise, and establishes a rigorous formal relationship between "Why?" and "Why not?" explanations. Concretely, the paper proves that, for any given instance, "Why?" explanations are minimal hitting sets of "Why not?" explanations and vice-versa. Furthermore, the paper devises novel algorithms for extracting and enumerating both forms of explanations.
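The hitting-set duality stated in the abstract can be illustrated with a small sketch. Assuming explanations are represented as sets of feature indices, the brute-force helper below enumerates all subset-minimal hitting sets of a family of sets; applying it to a hypothetical family of "Why not?" explanations yields the "Why?" explanations, and applying it again recovers the original family. This is purely illustrative — the paper's enumeration algorithms are far more sophisticated than this exhaustive search.

```python
from itertools import chain, combinations

def minimal_hitting_sets(families):
    """Enumerate all subset-minimal hitting sets of a family of sets.

    A hitting set intersects every set in the family. Candidates are
    tried in order of increasing size, so any candidate containing a
    previously found hitting set is not minimal and is skipped.
    (Brute force over the universe of elements; illustrative only.)
    """
    universe = sorted(set(chain.from_iterable(families)))
    hitting = []
    for r in range(1, len(universe) + 1):
        for cand in combinations(universe, r):
            c = set(cand)
            if all(c & s for s in families) and not any(h <= c for h in hitting):
                hitting.append(c)
    return hitting

# Hypothetical "Why not?" explanations for one instance: each is a set
# of features whose change can flip the prediction.
cxps = [{1, 2}, {2, 3}]

# Their minimal hitting sets play the role of "Why?" explanations ...
axps = minimal_hitting_sets(cxps)          # [{2}, {1, 3}]

# ... and hitting those again recovers the "Why not?" explanations.
roundtrip = minimal_hitting_sets(axps)     # [{1, 2}, {2, 3}]
```

Note that the duality holds in both directions: each family is exactly the set of minimal hitting sets of the other, which is what the paper proves for any given instance.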
| Original language | English |
|---|---|
| Title of host publication | XIXth International Conference of the Italian Association for Artificial Intelligence, Virtual Event, November 25–27, 2020, Revised Selected Papers |
| Editors | Matteo Baldoni, Stefania Bandini |
| Place of Publication | Cham, Switzerland |
| Publisher | Springer |
| Pages | 335-355 |
| Number of pages | 21 |
| ISBN (Electronic) | 9783030770914 |
| ISBN (Print) | 9783030770907 |
| DOIs | |
| Publication status | Published - 2021 |
| Event | International Conference of the Italian Association for Artificial Intelligence 2020 - Online, Italy; Duration: 24 Nov 2020 → 27 Nov 2020; Conference number: 19th; https://link.springer.com/book/10.1007/978-3-030-77091-4 (Proceedings) |
Publication series
| Name | Lecture Notes in Computer Science |
|---|---|
| Publisher | Springer |
| Volume | 12414 |
| ISSN (Print) | 0302-9743 |
| ISSN (Electronic) | 1611-3349 |
Conference
| Conference | International Conference of the Italian Association for Artificial Intelligence 2020 |
|---|---|
| Abbreviated title | AIxIA 2020 |
| Country/Territory | Italy |
| Period | 24/11/20 → 27/11/20 |
| Internet address | https://link.springer.com/book/10.1007/978-3-030-77091-4 |