PlumX Metrics

Who needs explanation and when? Juggling explainable AI and user epistemic uncertainty

International Journal of Human-Computer Studies, ISSN: 1071-5819, Vol: 165, Page: 102839
2022
  • Citations: 36
  • Usage: 0
  • Captures: 134
  • Mentions: 0
  • Social Media: 0

Metrics Details

  • Citations: 36
    • Citation Indexes: 35
    • Policy Citations: 1
  • Captures: 134

Article Description

In recent years, AI explainability (XAI) has received widespread attention. Although XAI is expected to play a positive role in decision-making and advice acceptance, studies have also found opposing effects. These opposing effects highlight the critical role of context, especially human factors, in understanding XAI's impacts. This study investigates the effects of providing three types of post-hoc explanations (alternative advice, prediction confidence scores, and prediction rationale) on two context-specific user decision-making outcomes (AI advice acceptance and advice adoption). Our field experiment results show that users' epistemic uncertainty matters for understanding XAI's impacts. As users' epistemic uncertainty increases, only providing prediction rationale is beneficial, whereas providing alternative advice and showing prediction confidence scores may hinder users' advice acceptance. Our study contributes to the emerging literature on the human aspects of XAI by clarifying the effects of XAI and showing that it may not always be desirable. It also contributes by highlighting the importance of considering user profiles when predicting XAI's impacts, designing XAI, and providing professional services with AI.
