AI explainability framework for environmental management research

Research output: Contribution to journal › Article › peer-review

25 Citations (Scopus)

Abstract

AI-powered deep learning networks are essential predictive tools whose performance depends on the availability of image data and on advances in processing hardware. However, little attention has been paid to explainable AI (XAI) in application fields such as environmental management. This study develops an explainability framework with a triadic structure focused on the input, the AI model, and the output. The framework makes three main contributions: (1) context-based augmentation of input data to maximize generalizability and minimize overfitting; (2) direct monitoring of AI model layers and parameters to enable leaner (lighter) networks suitable for edge-device deployment; and (3) an output-explanation procedure focused on the interpretability and robustness of the networks' predictive decisions. These contributions significantly advance the state of the art in XAI for environmental management research, with implications for improved understanding and use of AI networks in this field.
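The first contribution, context-based augmentation, can be illustrated with a minimal sketch: the augmentation policy is selected from the scene context rather than applied uniformly to all images. All names and policies below are illustrative assumptions for this sketch, not the paper's actual method or API.

```python
# Hypothetical sketch of context-based augmentation: the set of transforms
# applied to an input image depends on its environmental context.
# Context labels and policies are assumptions, not taken from the paper.

def select_augmentations(context):
    """Return augmentation op names deemed appropriate for a context."""
    policies = {
        "aerial": ["rotate", "flip_h", "flip_v"],    # orientation-free scenes
        "street": ["flip_h", "brightness", "crop"],  # gravity-aligned scenes
        "waste":  ["rotate", "brightness", "noise"], # cluttered close-ups
    }
    return policies.get(context, ["flip_h"])         # conservative default

def augment(image, context):
    """Apply the context's policy to a toy 2D image (list of rows)."""
    ops = select_augmentations(context)
    out = image
    if "flip_h" in ops:
        out = [row[::-1] for row in out]             # mirror horizontally
    if "rotate" in ops:
        out = [list(r) for r in zip(*out[::-1])]     # rotate 90 degrees
    return out
```

Restricting each context to transforms that preserve its semantics (e.g. rotating aerial imagery but not street scenes) is one way the abstract's goal of maximizing generalizability while limiting overfitting could be realized in practice.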

Original language: English
Article number: 118149
Number of pages: 7
Journal: Journal of Environmental Management
Volume: 342
DOIs
Publication status: Published - 15 Sept 2023

Keywords

  • Environmental crisis
  • Environmental management research
  • Explainable AI (XAI)
  • Management and valorization of solid waste
  • Multimodal and generative pre-trained transformers
  • Responsible and fair artificial intelligence
  • Vision-language deep learning models
