TY - JOUR
T1 - LoMEF: a framework to produce local explanations for global model time series forecasts
AU - Rajapaksha, Dilini
AU - Bergmeir, Christoph
AU - Hyndman, Rob J.
N1 - Funding Information:
This research was supported by the Australian Research Council under grant DE190100045, a Facebook Statistics for Improving Insights and Decisions research award, Monash University Graduate Research funding, and the MASSIVE high-performance computing facility, Australia. We thank the anonymous referees and the associate editor for their comments, which led to various improvements in the paper. We furthermore thank Brian Seaman for discussions that led to some of the ideas used in the paper.
Publisher Copyright:
© 2022 International Institute of Forecasters
PY - 2023/7
Y1 - 2023/7
N2 - Global forecasting models (GFMs), which are trained across a set of multiple time series, have shown superior results in many forecasting competitions and real-world applications compared with univariate forecasting approaches. One aspect of the popularity of statistical forecasting models such as ETS and ARIMA is their relative simplicity and interpretability (in terms of relevant lags, trend, seasonality, and other attributes), whereas GFMs typically lack interpretability, especially with respect to particular time series. This reduces the trust and confidence of stakeholders who must make decisions based on forecasts they cannot understand. To mitigate this problem, we propose a novel local model-agnostic interpretability approach to explain the forecasts from GFMs. We train simpler univariate surrogate models that are considered interpretable (e.g., ETS) on the predictions of the GFM, either on samples within a neighbourhood obtained through bootstrapping, or straightforwardly on the one-step-ahead global black-box model forecasts of the time series that needs to be explained. We then evaluate the explanations of the global models' forecasts both qualitatively and quantitatively, in terms of accuracy, fidelity, stability, and comprehensibility, and are able to show the benefits of our approach.
AB - Global forecasting models (GFMs), which are trained across a set of multiple time series, have shown superior results in many forecasting competitions and real-world applications compared with univariate forecasting approaches. One aspect of the popularity of statistical forecasting models such as ETS and ARIMA is their relative simplicity and interpretability (in terms of relevant lags, trend, seasonality, and other attributes), whereas GFMs typically lack interpretability, especially with respect to particular time series. This reduces the trust and confidence of stakeholders who must make decisions based on forecasts they cannot understand. To mitigate this problem, we propose a novel local model-agnostic interpretability approach to explain the forecasts from GFMs. We train simpler univariate surrogate models that are considered interpretable (e.g., ETS) on the predictions of the GFM, either on samples within a neighbourhood obtained through bootstrapping, or straightforwardly on the one-step-ahead global black-box model forecasts of the time series that needs to be explained. We then evaluate the explanations of the global models' forecasts both qualitatively and quantitatively, in terms of accuracy, fidelity, stability, and comprehensibility, and are able to show the benefits of our approach.
KW - Bootstrapping
KW - Explainability
KW - Global models
KW - Local interpretability
KW - Time series forecasting
UR - http://www.scopus.com/inward/record.url?scp=85136288023&partnerID=8YFLogxK
U2 - 10.1016/j.ijforecast.2022.06.006
DO - 10.1016/j.ijforecast.2022.06.006
M3 - Article
AN - SCOPUS:85136288023
SN - 0169-2070
VL - 39
SP - 1424
EP - 1447
JO - International Journal of Forecasting
JF - International Journal of Forecasting
IS - 3
ER -