Anytime Approximate Formal Feature Attribution

Research output: Chapter in Book/Report/Conference proceeding › Conference Paper › Research › peer-review

Abstract

The widespread use of artificial intelligence (AI) algorithms and machine learning (ML) models, on the one hand, and a number of crucial issues pertaining to them, on the other, warrant the need for explainable artificial intelligence (XAI). A key explainability question is: given that this decision was made, which input features contributed to it? Although a range of XAI approaches exist to tackle this problem, most of them have significant limitations. Heuristic XAI approaches suffer from the lack of quality guarantees and often try to approximate Shapley values, which is not the same as explaining which features contribute to a decision. A recent alternative is so-called formal feature attribution (FFA), which defines feature importance as the fraction of formal abductive explanations (AXp's) containing the given feature. This measures feature importance from the viewpoint of formally reasoning about the model's behavior. Namely, given a system of constraints logically representing the ML model of interest, computing an AXp requires finding a minimal unsatisfiable subset (MUS) of the system. Computing FFA directly from its definition is challenging because it involves counting over all AXp's (equivalently, over all MUSes), although it can be approximated. Based on these results, this paper makes several contributions. First, it gives compelling evidence that computing FFA is intractable, even if the set of contrastive formal explanations (CXp's), which correspond to minimal correction subsets (MCSes) of the logical system, is provided, by proving that the problem is #P-hard. Second, by using the duality between MUSes and MCSes, it proposes an efficient heuristic to switch from MCS enumeration to MUS enumeration on the fly, resulting in an adaptive explanation enumeration algorithm that effectively approximates FFA in an anytime fashion. Finally, experimental results obtained on a range of widely used datasets demonstrate the effectiveness of the proposed FFA approximation approach in terms of the error of the FFA approximation as well as the number of explanations computed and their diversity, given a fixed time limit.
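As a rough illustration of the two notions used in the abstract, the following Python sketch (toy data only; the `cxps` collection below is hypothetical and not taken from the paper) computes AXp's as the minimal hitting sets of a given set of CXp's, reflecting the MUS/MCS hitting-set duality, and then derives FFA as the fraction of AXp's containing each feature. The brute-force enumeration is purely for illustration; as the abstract notes, exact FFA computation is #P-hard in general.

```python
from itertools import combinations

def minimal_hitting_sets(sets):
    """Brute-force enumeration of all minimal hitting sets of a collection
    of sets. By hitting-set duality, the AXp's (MUSes) are exactly the
    minimal hitting sets of the CXp's (MCSes), and vice versa."""
    universe = sorted(set().union(*sets))
    # All subsets of the universe that intersect every set in the collection.
    hitters = [frozenset(c)
               for r in range(1, len(universe) + 1)
               for c in combinations(universe, r)
               if all(set(c) & s for s in sets)]
    # Keep only the subset-minimal ones ('<' is proper-subset on frozensets).
    return [h for h in hitters if not any(o < h for o in hitters)]

def ffa(axps):
    """Formal feature attribution: the fraction of AXp's containing each feature."""
    feats = set().union(*axps)
    return {f: sum(f in a for a in axps) / len(axps) for f in feats}

# Hypothetical CXp's (minimal correction subsets) over features {0, 1, 2}.
cxps = [{0, 1}, {1, 2}]
axps = minimal_hitting_sets(cxps)   # dual AXp's: {1} and {0, 2}
print(sorted(map(sorted, axps)))    # → [[0, 2], [1]]
print(ffa(axps))                    # each feature appears in 1 of 2 AXp's → 0.5
```

The anytime algorithm proposed in the paper works with partial sets of AXp's/CXp's instead, refining the FFA estimate as enumeration proceeds.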

Original language: English
Title of host publication: 27th International Conference on Theory and Applications of Satisfiability Testing
Editors: Supratik Chakraborty, Jie-Hong Roland Jiang
Place of Publication: Saarbrücken/Wadern, Germany
Publisher: Schloss Dagstuhl
Number of pages: 23
ISBN (Electronic): 9783959773348
DOIs
Publication status: Published - 2024
Event: International Conference on Theory and Applications of Satisfiability Testing 2024 - Pune, India
Duration: 21 Aug 2024 - 24 Aug 2024
Conference number: 27th
https://drops.dagstuhl.de/entities/volume/LIPIcs-volume-305 (Proceedings)
https://satisfiability.org/SAT24/ (Website)

Publication series

Name: Leibniz International Proceedings in Informatics, LIPIcs
Publisher: Schloss Dagstuhl
Volume: 305
ISSN (Print): 1868-8969

Conference

Conference: International Conference on Theory and Applications of Satisfiability Testing 2024
Abbreviated title: SAT 2024
Country/Territory: India
City: Pune
Period: 21/08/24 - 24/08/24
Internet address

Keywords

  • Explainable AI
  • Formal Feature Attribution
  • Minimal Unsatisfiable Subsets
  • MUS Enumeration
