Plan Recognition


Bisson

AAAI Conferences

Plan recognition, the problem of inferring the goals or plans of an observed agent, is a key element of situation awareness in human-machine and machine-machine interactions for many applications. Some plan recognition algorithms require knowledge about the potential behaviours of the observed agent in the form of a plan library, together with a decision model about how the observed agent uses the plan library to make decisions. It is however difficult to elicit and specify the decision model a priori. In this paper, we present a recursive neural network model that learns such a decision model automatically. We discuss promising experimental results of the approach with comparisons to selected state-of-the-art plan recognition algorithms on three benchmark domains.
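
To make the idea concrete, here is a minimal sketch, not the authors' model, of scoring plan-library hypotheses with a recursive composition over plan steps; the toy domain, dimensions, and weight initialisation are all illustrative assumptions.

```python
# Minimal sketch (not the authors' model): score plan-library hypotheses with a
# recursive composition over plan steps, using numpy only. Names, dimensions,
# and the toy domain are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16

# Hypothetical plan library: each top-level plan decomposes into primitive actions.
PLAN_LIBRARY = {
    "make-coffee": ["boil-water", "grind-beans", "brew"],
    "make-tea":    ["boil-water", "steep-leaves"],
}

# One embedding per primitive action, plus shared composition weights.
action_vec = {a: rng.normal(size=DIM)
              for plan in PLAN_LIBRARY.values() for a in plan}
W = rng.normal(size=(DIM, 2 * DIM)) * 0.1   # recursive composition matrix
w_out = rng.normal(size=DIM) * 0.1          # scoring vector

def compose(vectors):
    """Recursively fold step vectors left-to-right: h = tanh(W [h; x])."""
    h = vectors[0]
    for x in vectors[1:]:
        h = np.tanh(W @ np.concatenate([h, x]))
    return h

def plan_posterior(observed_actions):
    """Score each plan hypothesis consistent with the observed prefix, softmax-normalised."""
    scores = {}
    for plan, steps in PLAN_LIBRARY.items():
        if steps[:len(observed_actions)] == observed_actions:
            scores[plan] = w_out @ compose([action_vec[a] for a in steps])
    if not scores:
        return {}
    z = np.exp(np.array(list(scores.values())))
    return dict(zip(scores, z / z.sum()))

print(plan_posterior(["boil-water"]))
```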


Mirsky

AAAI Conferences

Plan recognition is one of the fundamental problems of AI, applicable to many domains, from user interfaces to cyber security. We focus on a class of algorithms that use plan libraries as input to the recognition process. Despite the prevalence of these approaches, they lack a standard representation, and have not been compared to each other on a common test bed. This paper directly addresses this gap by providing a standard plan library representation and evaluation criteria to consider. Our representation is comprehensive enough to describe a variety of known plan recognition problems, yet it can be easily applied to existing algorithms, which can be evaluated using our defined criteria. We demonstrate this technique on two known algorithms, SBR and PHATT. We provide meaningful insights about both the differences between and the abilities of the algorithms. We show that SBR is superior to PHATT in terms of both computation time and space, but at the expense of functionality and compact representation. We also show that depth is the single feature of a plan library that increases the complexity of the recognition, regardless of the algorithm used.
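
As a rough illustration of what a grammar-style plan library of the kind SBR and PHATT consume looks like, the sketch below encodes decomposition methods and computes the depth feature the paper singles out; the domain and class layout are assumptions, not the paper's representation.

```python
# Illustrative sketch only: a grammar-style plan library (complex tasks decompose
# into ordered sub-tasks; leaves are observable actions). The domain and helper
# names are assumptions, not the paper's format.
from dataclasses import dataclass, field

@dataclass
class PlanLibrary:
    # task -> list of alternative decompositions, each an ordered list of sub-tasks
    methods: dict[str, list[list[str]]] = field(default_factory=dict)

    def is_primitive(self, task: str) -> bool:
        return task not in self.methods

    def depth(self, task: str) -> int:
        """Depth of the decomposition tree rooted at `task` (the feature the
        paper identifies as driving recognition complexity)."""
        if self.is_primitive(task):
            return 0
        return 1 + max(self.depth(t)
                       for alts in self.methods[task] for t in alts)

library = PlanLibrary(methods={
    "attack":  [["recon", "exploit"], ["phish", "exploit"]],
    "exploit": [["scan", "inject"]],
})
print(library.depth("attack"))   # -> 2
```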


Cardona-Rivera

AAAI Conferences

Interactive narratives suffer from the narrative paradox: the tension that exists between providing a coherent narrative experience and allowing a player free rein over what she can manipulate in the environment. Knowing what actions a player in such an environment intends to carry out would help in managing the narrative paradox, since it would allow us to anticipate potential threats to the intended narrative experience and potentially mediate or eliminate them. The process of observing player actions and attempting to come up with an explanation for those actions (i.e. the plan that the player is trying to carry out) is the problem of plan recognition. We adopt the framing of narratives as plans and leverage recent advances that cast plan recognition as planning to develop a symbolic plan recognition system as a proof-of-concept model of a player's reasoning in an interactive narrative environment. In this paper we outline the system architecture, report on performance metrics that demonstrate adequate performance for non-trivial domains, and discuss the implications of treating players as plan recognizers.
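
The sketch below illustrates the plan-recognition-as-planning idea the system builds on (in the spirit of Ramírez and Geffner): candidate player goals are ranked by how much conforming to the observed actions inflates the optimal plan cost. The planner is stubbed out with a cost table; all names and numbers are invented.

```python
# Hedged sketch of "plan recognition as planning": rank candidate player goals by
# how much keeping to the observed actions stretches the optimal plan cost. In a
# real system the cost tables would come from calls to a classical planner.
import math

def goal_posterior(goals, cost_with_obs, cost_without_obs, beta=1.0):
    """P(G|O) proportional to exp(-beta * (cost(G, O) - cost(G))) over candidate goals."""
    weights = {g: math.exp(-beta * (cost_with_obs[g] - cost_without_obs[g]))
               for g in goals}
    z = sum(weights.values())
    return {g: w / z for g, w in weights.items()}

# Illustrative numbers: the observations barely detour from "rescue-ally" but add
# a long detour if the player were pursuing "steal-artifact".
goals = ["rescue-ally", "steal-artifact"]
print(goal_posterior(goals,
                     cost_with_obs={"rescue-ally": 6, "steal-artifact": 11},
                     cost_without_obs={"rescue-ally": 5, "steal-artifact": 7}))
```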


A Survey of Opponent Modeling in Adversarial Domains

Nashed, Samer | Zilberstein, Shlomo (UMass Amherst)

Journal of Artificial Intelligence Research

Opponent modeling is the ability to use prior knowledge and observations in order to predict the behavior of an opponent. This survey presents a comprehensive overview of existing opponent modeling techniques for adversarial domains, many of which must address stochastic, continuous, or concurrent actions, and sparse, partially observable payoff structures. We discuss all the components of opponent modeling systems, including feature extraction, learning algorithms, and strategy abstractions. These discussions lead us to propose a new form of analysis for describing and predicting the evolution of game states over time. We then introduce a new framework that facilitates method comparison, analyze a representative selection of techniques using the proposed framework, and highlight common trends among recently proposed methods. Finally, we list several open problems and discuss future research directions inspired by AI research on opponent modeling and related research in other disciplines.


Intention Recognition for Multiple Agents

Zhang, Zhang | Zeng, Yifeng | Chen, Yingke

arXiv.org Artificial Intelligence

Intention recognition is an important step in facilitating collaboration in multi-agent systems. Existing work mainly focuses on intention recognition in a single-agent setting and uses a descriptive model, e.g. Bayesian networks, in the recognition process. In this paper, we resort to a prescriptive approach to model agents' behaviour, in which their intentions are hidden while they execute their plans. We introduce landmarks into the behavioural model, thereby enhancing the informative features available for identifying intentions common to multiple agents. We further refine the model by focusing only on the action sequences in their plans, yielding a lightweight model for identifying and comparing their intentions. The new models provide a simple approach to grouping agents' common intentions based on the partial plans observed in agents' interactions. We provide experimental results in support of the approach.
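
The following sketch is one plausible reading of how landmarks can inform intention recognition, scoring each candidate intention by the share of its landmarks already achieved by the observed actions; it is an illustration over assumed domain facts, not the paper's model.

```python
# Minimal sketch (not the paper's model): use landmarks -- facts every plan for a
# goal must achieve -- to score candidate intentions by the fraction of their
# landmarks already achieved by the observed actions. Domain facts are invented.
def landmark_scores(candidate_landmarks, achieved_facts):
    """Return, per candidate goal, the fraction of its landmarks seen so far."""
    return {
        goal: len(landmarks & achieved_facts) / len(landmarks)
        for goal, landmarks in candidate_landmarks.items()
    }

candidate_landmarks = {
    "build-camp": {"has-wood", "has-tools", "at-clearing"},
    "hunt":       {"has-weapon", "at-forest"},
}
achieved = {"has-wood", "at-clearing"}
print(landmark_scores(candidate_landmarks, achieved))
# -> roughly {'build-camp': 0.67, 'hunt': 0.0}
```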


Contrastive Explanations of Plans through Model Restrictions

Krarup, Benjamin | Krivic, Senka (King's College London) | Magazzeni, Daniele (King's College London) | Long, Derek (King's College London) | Cashmore, Michael | Smith, David E. (PS Research)

Journal of Artificial Intelligence Research

In automated planning, the need for explanations arises when there is a mismatch between a proposed plan and the user’s expectation. We frame Explainable AI Planning as an iterative plan exploration process, in which the user asks a succession of contrastive questions that lead to the generation and solution of hypothetical planning problems that are restrictions of the original problem. The object of the exploration is for the user to understand the constraints that govern the original plan and, ultimately, to arrive at a satisfactory plan. We present the results of a user study that demonstrates that when users ask questions about plans, those questions are usually contrastive, i.e. “why A rather than B?”. We use the data from this study to construct a taxonomy of user questions that often arise during plan exploration. Our approach to iterative plan exploration is a process of successive model restriction. Each contrastive user question imposes a set of constraints on the planning problem, leading to the construction of a new hypothetical planning problem as a restriction of the original. Solving this restricted problem results in a plan that can be compared with the original plan, admitting a contrastive explanation. We formally define model-based compilations in PDDL2.1 for each type of constraint derived from a contrastive user question in the taxonomy, and empirically evaluate the compilations in terms of computational complexity. The compilations were implemented as part of an explanation framework supporting iterative model restriction. We demonstrate its benefits in a second user study.
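
A schematic sketch of the iterative model-restriction loop is given below; the PDDL2.1 compilations the paper defines are abstracted into an opaque constraint added to the problem, the planner is a stub, and every name here is an assumption made for illustration.

```python
# Abstract sketch of iterative model restriction: each contrastive question is
# compiled into a constraint on the planning problem, the restricted problem is
# re-solved, and the two plans are compared to form an explanation.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Problem:
    constraints: tuple = ()          # stand-in for a PDDL2.1 compilation

def compile_question(problem: Problem, question: str) -> Problem:
    """'Why A rather than B?' becomes a restriction forbidding A / forcing B."""
    return replace(problem, constraints=problem.constraints + (question,))

def explore(problem: Problem, plan_fn, questions):
    """Iteratively restrict the model and re-plan, yielding contrastive pairs."""
    current_plan = plan_fn(problem)
    for q in questions:
        problem = compile_question(problem, q)
        new_plan = plan_fn(problem)
        yield q, current_plan, new_plan   # compare the two plans for the explanation
        current_plan = new_plan

# Toy planner: the more restrictions, the longer the plan it "finds".
toy_planner = lambda p: [f"step{i}" for i in range(3 + len(p.constraints))]
for q, old, new in explore(Problem(), toy_planner,
                           ["why not load truck1 first?"]):
    print(q, len(old), "->", len(new))
```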


Recognizing LTLf/PLTLf Goals in Fully Observable Non-Deterministic Domain Models

Pereira, Ramon Fraga | Fuggitti, Francesco | De Giacomo, Giuseppe

arXiv.org Artificial Intelligence

Goal Recognition is the task of discerning the correct intended goal that an agent aims to achieve, given a set of possible goals, a domain model, and a sequence of observations as a sample of the plan being executed in the environment. Existing approaches assume that the possible goals are formalized as a conjunction of facts in deterministic settings. In this paper, we develop a novel approach that is capable of recognizing temporally extended goals in Fully Observable Non-Deterministic (FOND) planning domain models, focusing on goals on finite traces expressed in Linear Temporal Logic (LTLf) and (Pure) Past Linear Temporal Logic (PLTLf). We empirically evaluate our goal recognition approach using different LTLf and PLTLf goals over six common FOND planning domain models, and show that our approach accurately recognizes temporally extended goals at several levels of observability.
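
To make "temporally extended goals over finite traces" concrete, the toy code below evaluates a tiny LTLf-like fragment over a finite observation trace; the actual approach compiles LTLf/PLTLf formulas into automata and reasons over FOND models, and the operators and trace contents here are invented.

```python
# Toy illustration only: finite-trace evaluation of a tiny LTLf-like fragment,
# to make "temporally extended goal" concrete. Real implementations compile the
# formulas into automata; the trace and fluents below are invented.
def holds(formula, trace, i=0):
    """Finite-trace semantics for atoms, 'next', 'eventually', and 'always'."""
    op, *args = formula
    if op == "atom":
        return args[0] in trace[i]
    if op == "next":
        return i + 1 < len(trace) and holds(args[0], trace, i + 1)
    if op == "eventually":
        return any(holds(args[0], trace, j) for j in range(i, len(trace)))
    if op == "always":
        return all(holds(args[0], trace, j) for j in range(i, len(trace)))
    raise ValueError(op)

# Observed trace: one set of fluents per time step.
trace = [{"at-depot"}, {"holding-box"}, {"at-depot", "delivered"}]
goal = ("eventually", ("atom", "delivered"))
print(holds(goal, trace))   # True
```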


Goal recognition via model-based and model-free techniques

Borrajo, Daniel | Gopalakrishnan, Sriram | Potluru, Vamsi K.

arXiv.org Artificial Intelligence

Humans interact with the world based on their inner motivations (goals) by performing actions. Those actions might be observable by financial institutions. In turn, financial institutions might log all these observed actions for better understanding human behavior. Examples of such interactions are investment operations (buying or selling options), account-related activities (creating accounts, making transactions, withdrawing money), digital interactions (utilizing the bank's web or mobile app for configuring alerts, or applying for a new credit card), or even illicit operations (such as fraud or money laundering). Once human behavior can be better understood, financial institutions can improve their processes, allowing them to deepen the relationship with clients, offer targeted services (marketing), handle complaints-related interactions (operations), or perform fraud or money laundering investigations (compliance) [Borrajo et al., 2020].


Norm Identification through Plan Recognition

Oren, Nir | Meneguzzi, Felipe

arXiv.org Artificial Intelligence

Societal rules, as exemplified by norms, aim to provide a degree of behavioural stability to multi-agent societies. Norms regulate a society using the deontic concepts of permissions, obligations and prohibitions to specify what can, must and must not occur in a society. Many implementations of normative systems make various combinations of the following assumptions: that the set of norms is static and defined at design time; that agents joining a society are instantly informed of the complete set of norms; that the set of agents within a society does not change; and that all agents are aware of the existing norms. When any one of these assumptions is dropped, agents need a mechanism to identify the set of norms currently present within a society, or risk unwittingly violating the norms. In this paper, we develop a norm identification mechanism that uses a combination of parsing-based plan recognition and Hierarchical Task Network (HTN) planning mechanisms, which operates by analysing the actions performed by other agents. While our basic mechanism cannot learn in situations where norm violations take place, we describe an extension which is able to operate in the presence of violations.
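
One way to picture the core inference, though not the paper's parsing-based algorithm, is to compare the plans an HTN planner would produce in the absence of norms with the behaviour actually observed, flagging candidate obligations and prohibitions; the sketch below does this over toy action sets.

```python
# Illustrative sketch of the inference pattern described above (not the paper's
# algorithm): compare norm-free predicted plans with observed plans and flag
# candidate obligations and prohibitions. Plans are plain action lists here.
def identify_norms(predicted_plans, observed_plans):
    """predicted_plans: plans an HTN planner returns when no norms are assumed.
    observed_plans: action sequences actually executed by other agents."""
    predicted = set().union(*map(set, predicted_plans))
    always_done = set.intersection(*map(set, observed_plans))
    never_done = predicted - set().union(*map(set, observed_plans))
    return {
        "obligations": always_done - predicted,   # done by everyone, yet not needed
        "prohibitions": never_done,               # useful per the planner, never seen
    }

predicted = [["enter", "take-shortcut", "exit"]]
observed  = [["enter", "pay-toll", "take-long-road", "exit"],
             ["enter", "pay-toll", "take-long-road", "exit"]]
print(identify_norms(predicted, observed))
# obligations: pay-toll, take-long-road; prohibitions: take-shortcut
```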