Vered, Mor
Towards Explainable Goal Recognition Using Weight of Evidence (WoE): A Human-Centered Approach
Alshehri, Abeer, Abdulrahman, Amal, Alamri, Hajar, Miller, Tim, Vered, Mor
Goal recognition (GR) involves inferring an agent's unobserved goal from a sequence of observations. This is a critical problem in AI with diverse applications. Traditionally, GR has been addressed using 'inference to the best explanation' or abduction, where hypotheses about the agent's goals are generated as the most plausible explanations for observed behavior. Alternatively, some approaches enhance interpretability by ensuring that an agent's behavior aligns with an observer's expectations or by making the reasoning behind decisions more transparent. In this work, we tackle a different challenge: explaining the GR process in a way that is comprehensible to humans. We introduce and evaluate an explainable model for GR agents, grounded in the theoretical framework and cognitive processes underlying human behavior explanation. Drawing on insights from two human-agent studies, we propose a conceptual framework for human-centered explanations of GR. Using this framework, we develop the eXplainable Goal Recognition (XGR) model, which generates explanations for both why? and why not? questions. We evaluate the model computationally across eight GR benchmarks and through three user studies. The first study assesses the model's ability to generate human-like explanations within the Sokoban game domain, the second examines perceived explainability in the same domain, and the third evaluates the model's effectiveness in aiding decision-making in illegal fishing detection. Results demonstrate that the XGR model significantly enhances user understanding, trust, and decision-making compared to baseline models, underscoring its potential to improve human-agent collaboration.
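At the core of the approach is the Weight of Evidence, a standard Bayesian quantity: WoE(g : o) = log[ P(o | g) / P(o | not g) ]. A minimal sketch of scoring one observation against a goal hypothesis; the likelihoods and function name below are illustrative assumptions, not the paper's implementation:

    import math

    def weight_of_evidence(p_obs_given_goal, p_obs_given_other):
        # WoE(g : o) = log[ P(o | g) / P(o | not g) ]; positive values mean
        # the observation o speaks in favor of goal g, negative values against it.
        return math.log(p_obs_given_goal / p_obs_given_other)

    # Toy Sokoban-style example with assumed likelihoods: the observed push
    # is far more probable if the agent is pursuing the left crate.
    woe = weight_of_evidence(0.8, 0.2)
    print(f"WoE = {woe:+.2f} nats")  # +1.39: evidence supports the goal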
Explainable Goal Recognition: A Framework Based on Weight of Evidence
Alshehri, Abeer, Miller, Tim, Vered, Mor
We introduce and evaluate an eXplainable Goal Recognition (XGR) model that uses the Weight of Evidence (WoE) framework to explain goal recognition problems. Our model provides human-centered explanations that answer why? and why not? questions. We computationally evaluate the performance of our system over eight different domains. Using a human behavioral study to obtain the ground truth from human annotators, we further show that the XGR model can successfully generate human-like explanations. We then report on a study with 60 participants who observe agents playing the Sokoban game and then receive explanations of the goal recognition output. We investigate the understanding participants gain from the explanations through task prediction, explanation satisfaction, and trust.
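A hedged sketch of how why not? answers could be assembled under a WoE-style framework: score each observation for the rejected goal relative to the recognized one and surface the strongest counter-evidence. All observation names and probabilities below are invented for illustration:

    import math

    def woe(p_e_given_g, p_e_given_alt):
        # Evidence weight of observation e for goal g against an alternative goal.
        return math.log(p_e_given_g) - math.log(p_e_given_alt)

    # Assumed per-observation likelihoods for two candidate goals A and B.
    observations = {
        "push crate left": {"A": 0.7, "B": 0.1},
        "step up":         {"A": 0.4, "B": 0.5},
    }

    # "Why not B?": list observations by how strongly they count against B.
    for obs, p in sorted(observations.items(),
                         key=lambda kv: woe(kv[1]["B"], kv[1]["A"])):
        print(f"{obs}: WoE toward B = {woe(p['B'], p['A']):+.2f}")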
Let's Make It Personal, A Challenge in Personalizing Medical Inter-Human Communication
Vered, Mor, Dignum, Frank, Miller, Tim
Current AI approaches have frequently been used to help personalize many aspects of medical experiences and tailor them to a specific individual's needs. However, while such systems consider medically relevant information, they ignore socially relevant information about how a diagnosis should be communicated and discussed with the patient. The lack of this capability may lead to miscommunication, with serious consequences, such as patients opting out of the best treatment. Consider a case in which the same treatment is proposed to two different individuals. The manner in which this treatment is communicated to each should differ, depending on the individual patient's history, knowledge, and mental state. While it is clear that this communication should be conveyed by a human medical expert and not a software-based system, humans are not always capable of considering all of the relevant aspects and traversing all available information. We pose the challenge of creating Intelligent Agents (IAs) to assist medical service providers (MSPs) and consumers in establishing a more personalized human-to-human dialogue. Personalizing conversations will enable patients and MSPs to reach the solution that is best for their particular situation, such that a relationship of trust can be built and commitment to the outcome of the interaction is assured. We propose a four-part conceptual framework for personalized social interactions, survey the techniques available within current AI research, and discuss what has yet to be achieved.
Online Goal Recognition as Reasoning over Landmarks
Vered, Mor (Bar Ilan University) | Pereira, Ramon Fraga (Pontifical Catholic University of Rio Grande do Sul, Brazil) | Magnaguagno, Mauricio Cecilio (Pontifical Catholic University of Rio Grande do Sul, Brazil) | Meneguzzi, Felipe (Pontifical Catholic University of Rio Grande do Sul, Brazil) | Kaminka, Gal A. (Bar Ilan University)
Online goal recognition is the problem of recognizing the goal of an agent from an incomplete sequence of observations, using as few observations as possible. Recognizing goals with minimal domain knowledge as an agent executes its plan requires efficient algorithms to sift through a large space of hypotheses. We develop an online approach that recognizes goals in both continuous and discrete domains using a combination of goal mirroring and a generalized notion of landmarks adapted from the planning literature. Extensive experiments demonstrate that the approach is more efficient and substantially more accurate than the state of the art.
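Landmarks are facts that must hold at some point along every plan achieving a goal, so the fraction of a goal's landmarks already observed gives a cheap online ranking signal. A minimal sketch under that reading; the landmark sets and observed facts are invented for illustration:

    def landmark_ratio(achieved, goal_landmarks):
        # Fraction of the goal's landmarks already achieved by the observations.
        return len(achieved & goal_landmarks) / len(goal_landmarks)

    # Assumed landmark sets, extracted offline for each candidate goal.
    landmarks = {
        "goal_A": {"at_door", "has_key", "at_room2"},
        "goal_B": {"at_door", "at_hall"},
    }
    achieved = {"at_door", "has_key"}  # facts established by observations so far

    ranking = sorted(landmarks, reverse=True,
                     key=lambda g: landmark_ratio(achieved, landmarks[g]))
    print(ranking)  # ['goal_A', 'goal_B']: 2/3 of A's landmarks seen vs 1/2 of B's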
Plan Recognition in Continuous Domains
Kaminka, Gal A. (Bar Ilan University) | Vered, Mor (Bar Ilan University) | Agmon, Noa (Bar Ilan University)
Plan recognition is the task of inferring the plan of an agent based on an incomplete sequence of its observed actions. Previous formulations of plan recognition commit early to discretizations of the environment and of the observed agent's actions, which reduces recognition accuracy. To address this, we first provide a formalization of recognition problems that admits continuous environments as well as discrete domains. We then show that through mirroring, which generalizes plan recognition by planning, we can apply continuous-world motion planners to plan recognition. We provide formal arguments for the usefulness of mirroring and empirically evaluate it on more than a thousand recognition problems in three continuous domains and six classical planning domains.
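Mirroring scores each hypothesis by planning twice per goal: once optimally from the initial state, and once constrained to pass through the observed prefix; goals whose constrained plan costs barely more than the optimal one rank highest. A sketch of that scoring loop, where plan_cost is a placeholder for an off-the-shelf planner rather than a real API:

    def mirror_score(plan_cost, initial, observations, goal):
        # Ratio of the ideal plan cost to the cost of a plan forced through the
        # observed prefix; values near 1.0 indicate observations that match goal.
        ideal = plan_cost(initial, goal, via=())
        matched = plan_cost(initial, goal, via=observations)
        return ideal / matched

    def recognize(plan_cost, initial, observations, goals):
        # Rank goal hypotheses by how well the observations mirror their plans.
        return sorted(goals, reverse=True,
                      key=lambda g: mirror_score(plan_cost, initial, observations, g))

Because the same scoring loop works with any planner that can be constrained through waypoints, the hypothesis space can be continuous or discrete, which is the generalization the paper's formalization enables.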