Explainable Artificial Intelligence (XAI) for Increasing User Trust in Deep Reinforcement Learning Driven Autonomous Systems
Jeff Druce, Michael Harradon, James Tittle
–arXiv.org Artificial Intelligence
We consider the problem of providing users of deep Reinforcement Learning (RL) based systems with a better understanding of when the system's output can be trusted. We offer an explainable artificial intelligence (XAI) framework that provides a three-fold explanation: a graphical depiction of the system's generalization and performance in the current game state, an assessment of how well the agent would perform in semantically similar environments, and a narrative explanation of what the graphical information implies. We created a user interface for our XAI framework and evaluated its efficacy via a human-user experiment. The results demonstrate a statistically significant increase in user trust and acceptance of the AI system with explanation, compared with the same system without explanation.
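For illustration only, the sketch below shows one plausible way to bundle the three explanation components described in the abstract for display in a user interface. It is not from the paper; all names (StateExplanation, render_explanation, and the field names) are hypothetical assumptions.

```python
# Hypothetical sketch (not from the paper): bundling the three-fold
# explanation described in the abstract for presentation in a UI.
from dataclasses import dataclass
from typing import List


@dataclass
class StateExplanation:
    """Three-part explanation for the agent's behavior in one game state."""
    # 1. Graphical depiction of the system's generalization and performance
    #    in the current state (e.g., a rendered image or heat map).
    performance_graphic: bytes
    # 2. Estimated performance in semantically similar environments
    #    (e.g., expected scores for perturbed variants of the current state).
    similar_env_scores: List[float]
    # 3. Narrative explanation of what the graphical information implies.
    narrative: str


def render_explanation(exp: StateExplanation) -> str:
    """Produce a plain-text summary a UI could show alongside the graphic."""
    avg = sum(exp.similar_env_scores) / len(exp.similar_env_scores)
    return (
        f"{exp.narrative}\n"
        f"Expected performance across {len(exp.similar_env_scores)} "
        f"similar environments: {avg:.2f} (average score)."
    )
```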
Jun-7-2021