Schleibaum, Sören
ADESSE: Advice Explanations in Complex Repeated Decision-Making Environments
Schleibaum, Sören, Feng, Lu, Kraus, Sarit, Müller, Jörg P.
In the evolving landscape of human-centered AI, fostering a synergistic relationship between humans and AI agents in decision-making processes stands as a paramount challenge. This work considers a problem setup in which an intelligent agent, comprising a neural network-based prediction component and a deep reinforcement learning component, provides advice to a human decision-maker in complex repeated decision-making environments. Whether the human decision-maker follows the agent's advice depends on their beliefs and trust in the agent and on their understanding of the advice itself. To address this, we developed an approach named ADESSE that generates explanations about the adviser agent to improve human trust and decision-making. Computational experiments on a range of environments with varying model sizes demonstrate the applicability and scalability of ADESSE. Furthermore, an interactive game-based user study shows that participants were significantly more satisfied, achieved a higher reward in the game, and took less time to select an action when presented with explanations generated by ADESSE. These findings illuminate the critical role of tailored, human-centered explanations in AI-assisted decision-making.
AI for Explaining Decisions in Multi-Agent Environments
Kraus, Sarit, Azaria, Amos, Fiosina, Jelena, Greve, Maike, Hazon, Noam, Kolbe, Lutz, Lembcke, Tim-Benjamin, Müller, Jörg P., Schleibaum, Sören, Vollrath, Mark
Explanation is necessary for humans to understand and accept decisions made by an AI system when the system's goal is known. It is even more important when the AI system makes decisions in multi-agent environments, where the human does not know the system's goals since they may depend on other agents' preferences. In such situations, explanations should aim to increase user satisfaction, taking into account the system's decision, the user's and the other agents' preferences, the environment settings, and properties such as fairness, envy, and privacy. Generating explanations that will increase user satisfaction is very challenging; to this end, we propose a new research direction: Explainable decisions in Multi-Agent Environments (xMASE). We then review the state of the art and discuss research directions towards efficient methodologies and algorithms for generating explanations that will increase users' satisfaction with AI systems' decisions in multi-agent environments.