TY - JOUR
T1 - A global taxonomy of interpretable AI
T2 - unifying the terminology for the technical and social sciences
AU - Graziani, Mara
AU - Dutkiewicz, Lidia
AU - Calvaresi, David
AU - Amorim, Jose Pereira
AU - Yordanova, Katerina
AU - Vered, Mor
AU - Nair, Rahul
AU - Abreu, Pedro Henriques
AU - Blanke, Tobias
AU - Pulignano, Valeria
AU - Prior, John O
AU - Lauwaert, Lode
AU - Reijers, Wessel
AU - Depeursinge, Adrien
AU - Andrearczyk, Vincent
AU - Müller, Henning
N1 - Funding Information:
This work was supported by AI4Media of the European Union’s Horizon 2020 (EU-H2020) research and innovation program under grant agreement No. 951911 and the Hasler Foundation with project numbers 21042 and 21064. V. Pulignano would like to acknowledge the funding by the European Research Council (ERC) under the EU-H2020 grant agreement No. 833577, ResPecTMe project “Resolving Precariousness: Advancing the Theory and Measurement of Precariousness across the paid/unpaid work continuum”. J. Amorim was supported by the FCT Research Grant SFRH/BD/136786/2018.
Publisher Copyright:
© 2022, The Author(s).
PY - 2023/4
Y1 - 2023/4
N2 - Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application. Machine learning, as a major part of the current AI solutions, can learn from the data and through experience to reach high performance on various tasks. This growing success of AI algorithms has led to a need for interpretability to understand opaque models such as deep neural networks. Various requirements have been raised from different domains, together with numerous tools to debug, justify outcomes, and establish the safety, fairness and reliability of the models. This variety of tasks has led to inconsistencies in the terminology with, for instance, terms such as interpretable, explainable and transparent being often used interchangeably in methodology papers. These words, however, convey different meanings and are “weighted” differently across domains, for example in the technical and social sciences. In this paper, we propose an overarching terminology of interpretability of AI systems that can be referred to by the technical developers as much as by the social sciences community to pursue clarity and efficiency in the definition of regulations for ethical and reliable AI development. We show how our taxonomy and definition of interpretable AI differ from the ones in previous research and how they apply with high versatility to several domains and use cases, proposing a—highly needed—standard for the communication among interdisciplinary areas of AI.
AB - Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application. Machine learning, as a major part of the current AI solutions, can learn from the data and through experience to reach high performance on various tasks. This growing success of AI algorithms has led to a need for interpretability to understand opaque models such as deep neural networks. Various requirements have been raised from different domains, together with numerous tools to debug, justify outcomes, and establish the safety, fairness and reliability of the models. This variety of tasks has led to inconsistencies in the terminology with, for instance, terms such as interpretable, explainable and transparent being often used interchangeably in methodology papers. These words, however, convey different meanings and are “weighted” differently across domains, for example in the technical and social sciences. In this paper, we propose an overarching terminology of interpretability of AI systems that can be referred to by the technical developers as much as by the social sciences community to pursue clarity and efficiency in the definition of regulations for ethical and reliable AI development. We show how our taxonomy and definition of interpretable AI differ from the ones in previous research and how they apply with high versatility to several domains and use cases, proposing a—highly needed—standard for the communication among interdisciplinary areas of AI.
KW - Explainable artificial intelligence
KW - Interpretability
KW - Machine learning
UR - http://www.scopus.com/inward/record.url?scp=85137813675&partnerID=8YFLogxK
U2 - 10.1007/s10462-022-10256-8
DO - 10.1007/s10462-022-10256-8
M3 - Article
AN - SCOPUS:85137813675
SN - 1573-7462
VL - 56
SP - 3473
EP - 3504
JO - Artificial Intelligence Review
JF - Artificial Intelligence Review
ER -