Assessing heuristic Machine Learning explanations with model counting

Nina Narodytska, Aditya Shrotri, Kuldeep S. Meel, Alexey Ignatiev, Joao Marques-Silva

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

41 Citations (Scopus)


Machine Learning (ML) models are widely used in decision-making procedures in finance, medicine, education, etc. In these areas, ML outcomes can directly affect humans, e.g. by deciding whether a person should get a loan or be released from prison. Therefore, we cannot blindly rely on black-box ML models and need to explain the decisions made by them. This motivated the development of a variety of ML-explainer systems, including LIME and its successor Anchor. Due to the heuristic nature of explanations produced by existing tools, it is necessary to validate them. We propose a SAT-based method to assess the quality of explanations produced by Anchor. We encode a trained ML model and an explanation for a given prediction as a propositional formula. Then, by using a state-of-the-art approximate model counter, we estimate the quality of the provided explanation as the number of solutions supporting it.
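The idea in the abstract can be illustrated with a minimal sketch. The paper encodes the model and the explanation as a propositional formula and uses an approximate model counter (e.g. ApproxMC) to count supporting solutions; the toy classifier, feature names, and exhaustive enumeration below are hypothetical stand-ins for that pipeline, shown only to make the ratio being estimated concrete.

```python
from itertools import product

def model(x1, x2, x3):
    # Hypothetical toy "trained model" standing in for the encoded ML classifier.
    return x1 and (x2 or x3)

def anchor_precision(anchor):
    """Fraction of inputs matching the anchor on which the model predicts True.

    `anchor` fixes a subset of features, e.g. {"x1": True}. The paper
    estimates this ratio via approximate model counting over a CNF
    encoding; here we simply enumerate all assignments of the toy model.
    """
    names = ["x1", "x2", "x3"]
    matching = agreeing = 0
    for bits in product([False, True], repeat=len(names)):
        assignment = dict(zip(names, bits))
        if all(assignment[k] == v for k, v in anchor.items()):
            matching += 1          # assignment is covered by the anchor
            if model(*bits):
                agreeing += 1      # and it supports the explained prediction
    return agreeing / matching

print(anchor_precision({"x1": True}))  # 3 of the 4 assignments with x1=True predict True -> 0.75
```

Exhaustive enumeration is exponential in the number of features, which is precisely why the paper resorts to an approximate model counter for realistic encodings.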

Original language: English
Title of host publication: Theory and Applications of Satisfiability Testing – SAT 2019
Subtitle of host publication: 22nd International Conference, SAT 2019, Lisbon, Portugal, July 9–12, 2019, Proceedings
Editors: Mikoláš Janota, Inês Lynce
Place of Publication: Cham, Switzerland
Number of pages: 12
ISBN (Electronic): 9783030242589
ISBN (Print): 9783030242572
Publication status: Published - 2019
Externally published: Yes
Event: International Conference on Theory and Applications of Satisfiability Testing 2019 - Lisbon, Portugal
Duration: 7 Jul 2019 – 12 Jul 2019
Conference number: 22nd

Publication series

Name: Lecture Notes in Computer Science
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: International Conference on Theory and Applications of Satisfiability Testing 2019
Abbreviated title: SAT 2019