No silver bullet: interpretable ML models must be explained

Joao Marques-Silva, Alexey Ignatiev

Research output: Contribution to journal › Article › Research › peer-review

4 Citations (Scopus)


Recent years have witnessed a number of proposals for the use of so-called interpretable models in specific application domains, including high-risk and safety-critical domains. In contrast, other works have reported pitfalls of machine learning model interpretability, in part attributed to the lack of a rigorous definition of what an interpretable model should represent. This study proposes to relate interpretability with the ability of a model to offer explanations of why a prediction is made given some point in feature space. Under this general goal of offering explanations for predictions, this study reveals additional limitations of interpretable models. Concretely, it considers application domains where the purpose is to help human decision makers understand why some prediction was made, or why some other prediction was not made, and where irreducible (and thus minimal) explanations are sought. In such domains, this study argues that answers to such why (or why-not) questions can exhibit arbitrary redundancy, i.e., the answers can be further simplified, as long as these answers are obtained by human inspection of the interpretable ML model representation.
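The redundancy phenomenon described above can be illustrated with a toy example. The sketch below (an illustration under assumed details, not code from the paper) uses a two-feature decision tree computing x1 OR x2: the tree path for the instance (x1=0, x2=1) tests both features, so reading the path off the tree yields the explanation {x1=0, x2=1}, yet the literal x1=0 is redundant, since x2=1 alone entails the prediction. A brute-force entailment check over the feature space recovers the irreducible explanation:

```python
from itertools import product

FEATURES = ["x1", "x2"]

# Toy decision tree computing f(x1, x2) = x1 OR x2:
# root tests x1; the x1=0 branch then tests x2.
def predict(point):
    if point["x1"] == 1:
        return 1
    return 1 if point["x2"] == 1 else 0

def entails(partial, target):
    """True iff every completion of the partial assignment
    (dict feature -> value) yields the target prediction."""
    free = [f for f in FEATURES if f not in partial]
    for values in product([0, 1], repeat=len(free)):
        point = dict(partial, **dict(zip(free, values)))
        if predict(point) != target:
            return False
    return True

# Instance (x1=0, x2=1): the tree path tests x1 then x2,
# so the path explanation fixes both features.
instance = {"x1": 0, "x2": 1}
target = predict(instance)
explanation = dict(instance)

# Greedily drop literals while the remainder still entails the
# prediction, yielding an irreducible (subset-minimal) explanation.
for feature in list(explanation):
    trial = {k: v for k, v in explanation.items() if k != feature}
    if entails(trial, target):
        explanation = trial

print(explanation)  # {'x2': 1} — the path literal x1=0 was redundant
```

The greedy deletion loop is the standard way to shrink a set to a subset-minimal one; for realistic models the brute-force `entails` check would be replaced by a reasoner, which is precisely why the paper argues such explanations cannot in general be obtained by human inspection alone.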

Original language: English
Article number: 1128212
Number of pages: 15
Journal: Frontiers in Artificial Intelligence
Publication status: Published - 24 Apr 2023


Keywords

  • decision lists
  • decision sets
  • decision trees
  • explainable AI (XAI)
  • logic-based explainability
  • model interpretability
