Exploring local explanations of nonlinear models using animated linear projections

Nicholas Spyrison, Dianne Cook, Przemyslaw Biecek

Research output: Contribution to journal › Article › Research › peer-review

1 Citation (Scopus)

Abstract

The increased predictive power of machine learning models comes at the cost of increased complexity and loss of interpretability, particularly in comparison to parametric statistical models. This trade-off has led to the emergence of eXplainable AI (XAI), which provides methods, such as local explanations (LEs) and local variable attributions (LVAs), to shed light on how a model uses predictors to arrive at a prediction. These provide a point estimate of the linear variable importance in the vicinity of a single observation. However, LVAs tend not to effectively handle associations between predictors. To understand how the interaction between predictors affects the variable importance estimate, we can convert LVAs into linear projections and use the radial tour. This is also useful for examining how a model has made a mistake, the effect of outliers, or the clustering of observations. The approach is illustrated with examples from categorical (penguin species, chocolate types) and quantitative (soccer/football salaries, house prices) response models. The methods are implemented in the R package cheem, available on CRAN (a minimal usage sketch is given below).
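The following is a minimal sketch of getting started with the cheem package referenced in the abstract. It only shows installation and launching the interactive explorer via run_app(); the exact preprocessing workflow (preparing a model and its local explanations before exploring them with the radial tour) varies across package versions, so consult the CRAN vignettes for the current API.

```r
# Hedged sketch: install the cheem package from CRAN and open its
# interactive application for exploring local explanations with
# radial tours. Preprocessing steps are omitted here; see the
# package vignettes for how to prepare a model and its attributions.
install.packages("cheem")
library(cheem)

# Launches the Shiny application in a browser, where preloaded
# example analyses can be explored interactively.
run_app()
```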

Original language: English
Number of pages: 25
Journal: Computational Statistics
DOIs
Publication status: Accepted/In press - 2024

Keywords

  • Explainable artificial intelligence
  • Grand tour
  • Local explanations
  • Nonlinear model interpretability
  • Radial tour
  • Visual analytics
