Goal Recognition


Online Dynamic Goal Recognition in Gym Environments

Shamir, Matan, Elhadad, Osher, Nageris, Ben, Mirsky, Reuth

arXiv.org Artificial Intelligence

Goal Recognition (GR) is the task of inferring an agent's intended goal from partial observations of its behavior, typically in an online and one-shot setting. Despite recent advances in model-free GR, particularly in applications such as human-robot interaction, surveillance, and assistive systems, the field remains fragmented due to inconsistencies in benchmarks, domains, and evaluation protocols. To address this, we introduce gr-libs (https://github.com/MatanShamir1/gr_libs) and gr-envs (https://github.com/MatanShamir1/gr_envs), two complementary open-source frameworks that support the development, evaluation, and comparison of GR algorithms in Gym-compatible environments. gr-libs includes modular implementations of MDP-based GR baselines, diagnostic tools, and evaluation utilities. gr-envs provides a curated suite of environments adapted for dynamic and goal-directed behavior, along with wrappers that ensure compatibility with standard reinforcement learning toolkits. Together, these libraries offer a standardized, extensible, and reproducible platform for advancing GR research. Both packages are open-source and available on GitHub and PyPI.
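As a rough illustration of the kind of MDP-based GR baseline these libraries standardize, the sketch below ranks candidate goals by the log-likelihood that each goal-conditioned policy assigns to the observed actions. All names are hypothetical; this is not the gr_libs API, and the tabular policies stand in for trained RL agents.

```python
import math

def recognize(observations, goal_policies):
    """Score each candidate goal by the log-likelihood its policy assigns
    to the observed (state, action) pairs; return goals ranked best-first."""
    scores = {}
    for goal, policy in goal_policies.items():
        # Unseen (state, action) pairs get a small floor probability.
        scores[goal] = sum(math.log(policy.get((s, a), 1e-9))
                           for s, a in observations)
    return sorted(scores, key=scores.get, reverse=True)

# Toy domain: two goals, tabular policies mapping (state, action) -> prob.
left  = {(0, "L"): 0.9, (0, "R"): 0.1}
right = {(0, "L"): 0.1, (0, "R"): 0.9}
ranking = recognize([(0, "R")], {"left": left, "right": right})
```

In a real pipeline, the per-goal policies would come from trained agents and the ranking would be refreshed as each new observation arrives.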


Uncertainty-Resilient Active Intention Recognition for Robotic Assistants

Saborío, Juan Carlos, Vinci, Marc, Lima, Oscar, Stock, Sebastian, Niecksch, Lennart, Günther, Martin, Sung, Alexander, Hertzberg, Joachim, Atzmüller, Martin

arXiv.org Artificial Intelligence

Purposeful behavior in robotic assistants requires the integration of multiple components and technological advances. Often, the problem is reduced to recognizing explicit prompts, which limits autonomy, or is oversimplified through assumptions such as near-perfect information. We argue that a critical gap remains unaddressed: the challenge of reasoning about the uncertain outcomes and perception errors inherent to human intention recognition. In response, we present a framework designed to be resilient to uncertainty and sensor noise, integrating real-time sensor data with a combination of planners. Our integrated framework has been successfully tested on a physical robot with promising results. Robotic assistants may be integrated into modern industrial environments, e.g., delivering tools, parts, or modules interleaved with tidying the workspace. Such tasks, however, require a combination of robust planning, navigation, grasping, and perception, particularly when explicit commands are not available and the robot must identify and pursue goals in collaborative spaces shared with people.
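The core resilience idea, accumulating noisy evidence rather than trusting any single sensor reading, can be sketched as a Bayes filter over candidate intentions. This is a minimal illustration with invented intentions and likelihoods, not the authors' implementation.

```python
def update_belief(belief, likelihoods):
    """Bayes update: belief maps intention -> prior probability,
    likelihoods maps intention -> P(observation | intention)."""
    posterior = {i: belief[i] * likelihoods[i] for i in belief}
    z = sum(posterior.values()) or 1.0   # guard against all-zero evidence
    return {i: p / z for i, p in posterior.items()}

# Two hypothetical intentions, initially equally likely:
belief = {"fetch_tool": 0.5, "tidy_workspace": 0.5}
# Noisy observation "moves toward shelf", with a 20% misclassification rate:
belief = update_belief(belief, {"fetch_tool": 0.8, "tidy_workspace": 0.2})
```

Because each update multiplies in only one observation's likelihood, a single misread sensor shifts the belief without irreversibly committing the recognizer to a wrong intention.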



GRAML: Goal Recognition As Metric Learning

Shamir, Matan, Mirsky, Reuth

arXiv.org Artificial Intelligence

Goal Recognition (GR) is the problem of recognizing an agent's objectives based on observed actions. Recent data-driven approaches for GR alleviate the need for costly, manually crafted domain models. However, these approaches can only reason about a pre-defined set of goals, and time-consuming training is needed for new emerging goals. To keep this model-learning automated while enabling quick adaptation to new goals, this paper introduces GRAML: Goal Recognition As Metric Learning. GRAML uses a Siamese network to treat GR as a deep metric learning task, employing an RNN that learns a metric over an embedding space, where the embeddings for observation traces leading to different goals are distant, and embeddings of traces leading to the same goals are close. This metric is especially useful when adapting to new goals, even if given just one example observation trace per goal. Evaluated on a versatile set of environments, GRAML shows speed, flexibility, and runtime improvements over the state-of-the-art GR while maintaining accurate recognition.
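The decision rule behind metric-learning GR can be sketched with a toy encoder standing in for the learned Siamese RNN: embed the observed trace, then pick the goal whose single example-trace embedding is nearest. The traces and features below are invented for illustration.

```python
import math

def embed(trace):
    """Toy stand-in for the learned RNN encoder: mean of per-step features."""
    n = len(trace)
    return tuple(sum(step[i] for step in trace) / n
                 for i in range(len(trace[0])))

def nearest_goal(obs_trace, goal_examples):
    """Return the goal whose example-trace embedding is closest to the
    embedding of the observed trace (one example trace per goal)."""
    e = embed(obs_trace)
    return min(goal_examples, key=lambda g: math.dist(e, embed(goal_examples[g])))

examples = {"A": [(0.0, 0.1), (0.0, 0.9)],   # one example trace per goal
            "B": [(1.0, 0.2), (1.0, 0.8)]}
guess = nearest_goal([(0.9, 0.5)], examples)
```

Adapting to a new goal then amounts to adding one example trace to the dictionary, with no retraining of the encoder.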


Intention Recognition in Real-Time Interactive Navigation Maps

Zhao, Peijie, Arefin, Zunayed, Meneguzzi, Felipe, Pereira, Ramon Fraga

arXiv.org Artificial Intelligence

In this demonstration, we develop IntentRec4Maps, a system that recognises users' intentions in interactive maps for real-world navigation. IntentRec4Maps uses the Google Maps Platform as the real-world interactive map, paired with an effective approach for recognising users' intentions in real time. We showcase the recognition process of IntentRec4Maps using two different path planners and a Large Language Model (LLM). GitHub: https://github.com/PeijieZ/IntentRec4Maps


Goal Recognition using Actor-Critic Optimization

Nageris, Ben, Meneguzzi, Felipe, Mirsky, Reuth

arXiv.org Artificial Intelligence

Goal Recognition aims to infer an agent's goal from a sequence of observations. Existing approaches often rely on manually engineered domains and discrete representations. Deep Recognition using Actor-Critic Optimization (DRACO) is a novel approach based on deep reinforcement learning that overcomes these limitations by providing two key contributions. First, it is the first goal recognition algorithm that learns a set of policy networks from unstructured data and uses them for inference. Second, DRACO introduces new metrics for assessing goal hypotheses through continuous policy representations. DRACO achieves state-of-the-art performance for goal recognition in discrete settings while not using the structured inputs used by existing approaches. Moreover, it outperforms these approaches in more challenging, continuous settings at substantially reduced costs in both computing and memory. Together, these results showcase the robustness of the new algorithm, bridging traditional goal recognition and deep reinforcement learning.
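One way to assess goal hypotheses with continuous policy representations, sketched here with invented one-dimensional actors and not the paper's exact metrics, is to prefer the goal-conditioned actor whose predicted actions best match the observed ones.

```python
def score(observations, actor):
    """Mean squared error between observed and predicted actions
    (lower = better fit of this goal hypothesis)."""
    errs = [(a - actor(s)) ** 2 for s, a in observations]
    return sum(errs) / len(errs)

# Hypothetical actors, one per goal, each mapping state -> continuous action:
actors = {"goal_up":   lambda s: 1.0,
          "goal_down": lambda s: -1.0}
obs = [(0.0, 0.9), (0.1, 1.1)]          # the agent consistently acts "up"
best = min(actors, key=lambda g: score(obs, actors[g]))
```

Because the comparison is in action space rather than over discrete state transitions, the same rule applies unchanged in continuous domains.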


Towards Explainable Goal Recognition Using Weight of Evidence (WoE): A Human-Centered Approach

Alshehri, Abeer, Abdulrahman, Amal, Alamri, Hajar, Miller, Tim, Vered, Mor

arXiv.org Artificial Intelligence

Goal recognition (GR) involves inferring an agent's unobserved goal from a sequence of observations. This is a critical problem in AI with diverse applications. Traditionally, GR has been addressed using 'inference to the best explanation' or abduction, where hypotheses about the agent's goals are generated as the most plausible explanations for observed behavior. Alternatively, some approaches enhance interpretability by ensuring that an agent's behavior aligns with an observer's expectations or by making the reasoning behind decisions more transparent. In this work, we tackle a different challenge: explaining the GR process in a way that is comprehensible to humans. We introduce and evaluate an explainable model for GR agents, grounded in the theoretical framework and cognitive processes underlying human behavior explanation. Drawing on insights from two human-agent studies, we propose a conceptual framework for human-centered explanations of GR. Using this framework, we develop the eXplainable Goal Recognition (XGR) model, which generates explanations for both 'why' and 'why not' questions. We evaluate the model computationally across eight GR benchmarks and through three user studies. The first study assesses the efficiency of generating human-like explanations within the Sokoban game domain, the second examines perceived explainability in the same domain, and the third evaluates the model's effectiveness in aiding decision-making in illegal fishing detection. Results demonstrate that the XGR model significantly enhances user understanding, trust, and decision-making compared to baseline models, underscoring its potential to improve human-agent collaboration.
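The underlying weight-of-evidence measure has a standard form: evidence e supports goal g over its complement by woe = log(P(e|g) / P(e|not g)), with positive values answering "why" and negative values "why not". A minimal sketch with made-up probabilities:

```python
import math

def woe(p_e_given_g, p_e_given_not_g):
    """Weight of evidence of e for goal g versus its complement."""
    return math.log(p_e_given_g / p_e_given_not_g)

# Hypothetical evidence "the agent pushed the box left",
# four times as likely under goal g as under any other goal:
w = woe(0.8, 0.2)   # positive: this observation supports g
```

Summing such terms over the observations yields a per-goal evidence score whose individual terms are directly citable in an explanation.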


Fact Probability Vector Based Goal Recognition

Wilken, Nils, Cohausz, Lea, Bartelt, Christian, Stuckenschmidt, Heiner

arXiv.org Artificial Intelligence

We present a new approach to goal recognition that involves comparing observed facts with their expected probabilities. These probabilities depend on a specified goal g and initial state s0. Our method maps these probabilities and observed facts into a real vector space to compute heuristic values for potential goals. These values estimate the likelihood of a given goal being the true objective of the observed agent. As obtaining exact expected probabilities for observed facts in an observation sequence is often practically infeasible, we propose and empirically validate a method for approximating these probabilities. Our empirical results show that the proposed approach offers improved goal recognition precision compared to state-of-the-art techniques while reducing computational complexity.
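The abstract's core construction can be sketched as follows, with invented probabilities: represent each candidate goal by its vector of expected fact probabilities (given the goal and initial state), represent the observation by a 0/1 vector of which facts actually held, and rank goals by vector distance.

```python
import math

def goal_heuristic(observed, expected):
    """Euclidean distance between the observed fact vector and the
    expected fact probabilities for a goal (lower = more likely goal)."""
    return math.dist(observed, expected)

observed = [1, 0, 1]                     # which facts held in the observation
expected = {"g1": [0.9, 0.1, 0.8],       # approx. P(fact | g1, s0) per fact
            "g2": [0.2, 0.9, 0.1]}
best = min(expected, key=lambda g: goal_heuristic(observed, expected[g]))
```

The expensive part in practice is estimating the expected probabilities, which is why the paper approximates them rather than computing them exactly.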


ODGR: Online Dynamic Goal Recognition

Shamir, Matan, Elhadad, Osher, Taylor, Matthew E., Mirsky, Reuth

arXiv.org Artificial Intelligence

Traditionally, Reinforcement Learning (RL) problems aim at optimizing the behavior of an agent. This paper proposes a novel take on RL, in which it is used to learn the policy of another agent so as to allow real-time recognition of that agent's goals. Goal Recognition (GR) has traditionally been framed as a planning problem where one must recognize an agent's objectives based on its observed actions. Recent approaches have shown how reinforcement learning can be used as part of the GR pipeline, but are limited to recognizing predefined goals and lack scalability in domains with a large goal space. This paper formulates a novel problem, "Online Dynamic Goal Recognition" (ODGR), as a first step to address these limitations. Contributions include introducing the concept of dynamic goals into the standard GR problem definition, revisiting common approaches by reformulating them using ODGR, and demonstrating the feasibility of solving ODGR in a navigation domain using transfer learning. These novel formulations open the door for future extensions of existing transfer learning-based GR methods, which will be robust to changing and expansive real-time environments.
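The ODGR interaction protocol, in which goals can arrive and change between recognition queries, can be sketched with a toy recognizer. Class and method names are illustrative, and the simple trace-overlap heuristic stands in for the paper's transfer-learning machinery.

```python
class OnlineRecognizer:
    def __init__(self):
        self.examples = {}                    # goal -> one example trace

    def register_goal(self, goal, example_trace):
        """Dynamic-goal update: add or replace a goal at runtime,
        without offline retraining."""
        self.examples[goal] = example_trace

    def recognize(self, obs):
        """Rank goals by action overlap between the observation and
        each goal's example trace; return the best match."""
        return max(self.examples,
                   key=lambda g: len(set(obs) & set(self.examples[g])))

r = OnlineRecognizer()
r.register_goal("door_A", ["up", "up", "left"])
guess1 = r.recognize(["up", "left"])
r.register_goal("door_B", ["down", "right"])  # new goal arrives at runtime
guess2 = r.recognize(["down"])
```

The point of the protocol is that `register_goal` must be cheap relative to training a recognizer from scratch, which is what motivates the transfer-learning formulation.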


Predicting Customer Goals in Financial Institution Services: A Data-Driven LSTM Approach

Estornell, Andrew, Vasileiou, Stylianos Loukas, Yeoh, William, Borrajo, Daniel, Silva, Rui

arXiv.org Artificial Intelligence

In recent years, driven by rapid technological advancements, evolving customer expectations, and increased competition, financial institutions have come under pressure to develop a deeper understanding of their clients' needs and preferences as customers demand more personalized and convenient services. This has led to a growing interest in leveraging data-driven approaches to gain insights into customer behavior and predict future actions. By incorporating state-space graph embeddings into the LSTM model, we further enrich the model's understanding of the relationships and dependencies among various features within the dataset, which may lead to improved performance. This combination of LSTM models and state graph embeddings offers a more scalable and efficient solution for predicting customer goals and actions, while maintaining a high level of accuracy and robustness.