Demystifying AI for the workforce: the role of explainable AI in worker acceptance and management relations

Miles M. Yang, Ying Lu, Fang Lee Cooke

Research output: Contribution to journal › Article › Research › peer-review

Abstract

In the digital era, organizations are increasingly leveraging artificial intelligence (AI) to optimize their operations and decision-making. However, the opaqueness of AI processes raises concerns over trust, fairness, and autonomy, especially in the gig economy, where AI-driven management is ubiquitous. This study investigates how explainable AI (xAI), through the comparative use of counterfactual versus factual and local versus global explanations, shapes gig workers’ acceptance of AI-driven decisions and management relations, drawing on cognitive load theory. Using experimental data from 1107 gig workers, we found that both counterfactual (relative to factual) and local (relative to global) explanations increase the acceptance of AI decisions. However, the combination of local and counterfactual explanations can overwhelm workers, thereby reducing these positive effects. Furthermore, worker acceptance mediated the relationship between xAI explanations and management relations. A follow-up study using a simplified scenario and additional procedural controls confirmed the robustness of these effects. Our findings underscore the value of carefully tailored xAI in fostering equitable, transparent, and constructive organizational practices in digitally mediated work environments.

Original language: English
Number of pages: 35
Journal: Journal of Management Studies
DOIs
Publication status: Accepted/In press - 2025

Keywords

  • acceptance of AI-driven decision
  • explainable AI (xAI)
  • gig worker
  • management relations
  • scenario-based experiment