Goto

Results


Goal recognition via model-based and model-free techniques

arXiv.org Artificial Intelligence

Humans interact with the world, driven by their inner motivations (goals), by performing actions. Those actions might be observable by financial institutions, which in turn might log all of these observed actions to better understand human behavior. Examples of such interactions are investment operations (buying or selling options), account-related activities (creating accounts, making transactions, withdrawing money), digital interactions (using the bank's web or mobile app to configure alerts, or applying for a new credit card), or even illicit operations (such as fraud or money laundering). Once human behavior is better understood, financial institutions can improve their processes, allowing them to deepen the relationship with clients by offering targeted services (marketing), handling complaint-related interactions (operations), or performing fraud or money-laundering investigations (compliance) [Borrajo et al., 2020].


Norm Identification through Plan Recognition

arXiv.org Artificial Intelligence

Societal rules, as exemplified by norms, aim to provide a degree of behavioural stability to multi-agent societies. Norms regulate a society using the deontic concepts of permissions, obligations and prohibitions to specify what can, must and must not occur in a society. Many implementations of normative systems rely on various combinations of the following assumptions: that the set of norms is static and defined at design time; that agents joining a society are instantly informed of the complete set of norms; that the set of agents within a society does not change; and that all agents are aware of the existing norms. When any one of these assumptions is dropped, agents need a mechanism to identify the set of norms currently present within a society, or risk unwittingly violating them. In this paper, we develop a norm identification mechanism that uses a combination of parsing-based plan recognition and Hierarchical Task Network (HTN) planning mechanisms, and that operates by analysing the actions performed by other agents. While our basic mechanism cannot learn in situations where norm violations take place, we describe an extension that is able to operate in the presence of violations.
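
As a rough Python illustration of the underlying intuition (not the parsing-based/HTN mechanism the paper develops), the sketch below mines candidate norms from a set of recognised plans: actions that appear in every recognised plan become candidate obligations, while domain actions that are never observed become candidate prohibitions. The traces and action inventory are invented for the example.

    # Minimal sketch of norm identification from recognised plans (illustrative only).
    from typing import List, Set

    def identify_candidate_norms(recognised_plans: List[List[str]],
                                 executable_actions: Set[str]):
        """Return (candidate_obligations, candidate_prohibitions)."""
        if not recognised_plans:
            return set(), set()
        observed = [set(plan) for plan in recognised_plans]
        # Actions present in every recognised plan: candidate obligations.
        obligations = set.intersection(*observed)
        # Actions possible in the domain but never observed: candidate prohibitions.
        prohibitions = executable_actions - set.union(*observed)
        return obligations, prohibitions

    # Hypothetical plans recognised from observing other agents in a cafe scenario.
    plans = [["enter", "queue", "order", "pay", "leave"],
             ["enter", "queue", "order", "pay", "sit", "leave"]]
    actions = {"enter", "queue", "order", "pay", "sit", "leave", "skip_queue"}
    print(identify_candidate_norms(plans, actions))
    # e.g. obligations {'enter', 'queue', 'order', 'pay', 'leave'}, prohibitions {'skip_queue'}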


A Transfer Learning Method for Goal Recognition Exploiting Cross-Domain Spatial Features

arXiv.org Artificial Intelligence

The ability to infer the intentions of others, predict their goals, and deduce their plans is critical for intelligent agents. For a long time, several approaches investigated the use of symbolic representations and inference with limited success, principally because it is difficult to capture explicitly the cognitive knowledge behind human decisions. The trend nowadays is increasingly to learn to infer intentions directly from data, using deep learning in particular. We are now observing interesting applications of intent classification in natural language processing, visual activity recognition, and emerging approaches in other domains. This paper discusses a novel approach combining few-shot and transfer learning with cross-domain features to learn to infer the intent of an agent navigating in physical environments and executing arbitrarily long sequences of actions to achieve its goals. Experiments in synthetic environments demonstrate improved performance in terms of learning from few samples and generalizing to unseen configurations, compared to a deep-learning baseline approach.
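
A minimal Python sketch of the general idea, under assumptions of our own (a grid world, visitation-count features as the cross-domain spatial representation, and a nearest-prototype few-shot classifier) rather than the architecture evaluated in the paper:

    # Illustrative only: encode a navigation trajectory as a domain-agnostic spatial grid
    # and classify the goal with a nearest-prototype few-shot classifier.
    import numpy as np

    def trajectory_features(traj, grid_size=8):
        """Visitation-count grid for a sequence of (x, y) cells, flattened to a vector."""
        grid = np.zeros((grid_size, grid_size))
        for x, y in traj:
            grid[y, x] += 1.0
        return grid.flatten() / max(len(traj), 1)

    def nearest_prototype(query, prototypes):
        """Return the goal whose prototype feature vector is closest to the query."""
        return min(prototypes, key=lambda g: np.linalg.norm(query - prototypes[g]))

    # One support trajectory per candidate goal (few-shot), plus an observed query trajectory.
    support = {"goal_A": trajectory_features([(0, 0), (1, 0), (2, 0), (3, 0)]),
               "goal_B": trajectory_features([(0, 0), (0, 1), (0, 2), (0, 3)])}
    observed = trajectory_features([(0, 0), (1, 0), (2, 0)])
    print(nearest_prototype(observed, support))  # -> goal_A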


Active Goal Recognition

arXiv.org Artificial Intelligence

To coordinate with other systems, agents must be able to determine what those systems are currently doing and predict what they will be doing in the future---plan and goal recognition. There are many methods for plan and goal recognition, but they assume a passive observer that continually monitors the target system. Real-world domains, where information gathering has a cost (e.g., moving a camera or a robot, or time taken away from another task), will often require a more active observer. We propose to combine goal recognition with other observer tasks in order to obtain active goal recognition (AGR). We discuss this problem and provide a model and preliminary experimental results for one form of this composite problem. As expected, the results show that optimal behavior in AGR problems balances information gathering with other actions (e.g., task completion) so as to achieve all tasks jointly and efficiently. We hope that our formulation opens the door for extensive further research on this interesting and realistic problem.
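
The following Python sketch illustrates the observe-versus-act trade-off with a myopic rule invented for the example (entropy-based value of information against a fixed observation cost); it is not the model or experimental setup used in the paper.

    # Illustrative observe-vs-act decision for active goal recognition (hypothetical numbers).
    import math

    def entropy(belief):
        """Shannon entropy (bits) of a belief over the target's candidate goals."""
        return -sum(p * math.log(p, 2) for p in belief.values() if p > 0)

    def choose(belief, observe_cost, expected_entropy_after_obs, value_per_bit, task_step_value):
        """Myopically observe only if the expected information gain outweighs its cost."""
        info_gain = entropy(belief) - expected_entropy_after_obs
        observe_utility = value_per_bit * info_gain - observe_cost
        return "observe" if observe_utility > task_step_value else "work_on_own_task"

    belief = {"goal_A": 0.5, "goal_B": 0.5}        # maximally uncertain about the target
    print(choose(belief, observe_cost=1.0,
                 expected_entropy_after_obs=0.2,   # assumed effect of pointing the camera
                 value_per_bit=3.0, task_step_value=1.0))  # -> observe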


Balancing Goal Obfuscation and Goal Legibility in Settings with Cooperative and Adversarial Observers

arXiv.org Artificial Intelligence

In order to be useful in the real world, AI agents need to plan and act in the presence of others, who may include adversarial and cooperative entities. In this paper, we consider the problem where an autonomous agent needs to act in a manner that clarifies its objectives to cooperative entities while preventing adversarial entities from inferring those objectives. We show that this problem is solvable when cooperative entities and adversarial entities use different types of sensors and/or prior knowledge. We develop two new solution approaches for computing such plans. One approach provides an optimal solution to the problem by using an IP solver to provide maximum obfuscation for adversarial entities while providing maximum legibility for cooperative entities in the environment, whereas the other approach provides a satisficing solution using heuristic-guided forward search to achieve preset levels of obfuscation and legibility for adversarial and cooperative entities respectively. We show the feasibility and utility of our algorithms through extensive empirical evaluation on problems derived from planning benchmarks.
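
As a hedged illustration of what obfuscation and legibility mean when the two observer types sense actions differently (not the paper's IP or heuristic-search formulation), the Python sketch below counts how many candidate goals each observer still considers consistent with a plan prefix; the reference plans and sensor models are invented.

    # Illustrative only: goal consistency under two different (hypothetical) sensor models.
    def observation_sequence(plan, sensor):
        """Project a plan into what an observer with the given sensor model perceives."""
        return [sensor.get(a, "unknown") for a in plan]

    def consistent_goals(prefix, sensor, goal_plans):
        """Goals whose reference plan yields the same observation sequence as the prefix."""
        obs = observation_sequence(prefix, sensor)
        return {g for g, p in goal_plans.items()
                if observation_sequence(p[:len(prefix)], sensor) == obs}

    goal_plans = {"G1": ["move_n", "move_n", "pick_red"],
                  "G2": ["move_n", "move_n", "pick_blue"]}
    adversary_sensor = {"move_n": "moved", "pick_red": "picked", "pick_blue": "picked"}
    friend_sensor = {"move_n": "moved", "pick_red": "picked_red", "pick_blue": "picked_blue"}

    prefix = ["move_n", "move_n", "pick_red"]       # the agent's true goal is G1
    print(consistent_goals(prefix, adversary_sensor, goal_plans))  # both goals remain: obfuscated
    print(consistent_goals(prefix, friend_sensor, goal_plans))     # only G1 remains: legible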


Integration of Planning with Recognition for Responsive Interaction Using Classical Planners

AAAI Conferences

Interaction between multiple agents requires some form of coordination and a level of mutual awareness. When computers and robots interact with people, they need to recognize human plans and react appropriately. Plan and goal recognition techniques have focused on identifying an agent's task given a sufficiently long action sequence. However, by the time the plan and/or goal are recognized, it may be too late to compute an interactive response. We propose an integration of planning with probabilistic recognition in which each method uses intermediate results from the other as a guiding heuristic, both for recognizing the plan/goal in progress and for computing the interactive response. We show that, like the recognition method we build on, these interaction problems can be compiled into classical planning problems and solved using off-the-shelf methods. In addition to the methodology, this paper introduces problem categories for different forms of interaction, an evaluation metric for the benefits of the interaction, and extensions to the recognition algorithm that make its intermediate results more practical while the plan is in progress.
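
A toy Python sketch of the interleaving idea, with invented reference plans and a hypothetical response table standing in for the classical-planner calls that the paper's compilation would produce:

    # Illustrative only: after each observed action, re-rank candidate goals by how well a
    # reference plan explains the observations so far, then pick a response for the best guess.
    def explains(reference_plan, observations):
        """Fraction of observations matched, in order, against the reference plan."""
        it = iter(reference_plan)
        matched = sum(1 for o in observations if o in it)
        return matched / max(len(observations), 1)

    reference_plans = {"make_coffee": ["get_cup", "fill_water", "brew", "pour"],
                       "make_tea": ["get_cup", "fill_water", "boil", "steep"]}
    responses = {"make_coffee": "fetch_milk", "make_tea": "fetch_honey"}  # assumed responses

    observations = []
    for action in ["get_cup", "fill_water", "brew"]:     # stream of observed human actions
        observations.append(action)
        best = max(reference_plans, key=lambda g: explains(reference_plans[g], observations))
        print(action, "->", best, "| planned response:", responses[best])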


An AI Planning-Based Approach to the Multi-Agent Plan Recognition Problem (Preliminary Report)

AAAI Conferences

Plan Recognition is the problem of inferring the goals and plans of an agent given a set of observations. In Multi-Agent Plan Recognition (MAPR) the task is extended to inferring the goals and plans of multiple agents. Previous MAPR approaches have largely focused on recognizing team structures and behaviors, given perfect and complete observations of the actions of individual agents. However, in many real-world applications of MAPR, observations are unreliable or missing; they are often over properties of the world rather than actions; and the observations that are made may not be explainable by the agents' goals and plans. Moreover, the actions of the agents could be durative or concurrent. In this paper, we address the problem of MAPR with temporal actions and with observations that can be unreliable, missing or unexplainable. To this end, we propose a multi-step compilation technique that enables the use of AI planning for the computation of the posterior probabilities of the possible goals. In addition, we propose a set of novel benchmarks that enable a standard evaluation of solutions that address the MAPR problem with temporal actions and such observations. We present results of an experimental evaluation on this set of benchmarks, using several temporal and diverse planners.
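
In the spirit of cost-based plan recognition as planning (the paper's multi-step temporal compilation differs in its details), goal posteriors can be derived from the optimal plan costs a planner returns with and without the observations compiled into the problem; the costs and priors below are hypothetical.

    # Sketch of cost-difference goal posteriors (illustrative numbers, not planner output).
    import math

    def goal_posterior(cost_with_obs, cost_without_obs, priors, beta=1.0):
        """P(G|O) proportional to prior(G) * exp(-beta * (cost(G,O) - cost(G))), normalised."""
        unnorm = {g: priors[g] * math.exp(-beta * (cost_with_obs[g] - cost_without_obs[g]))
                  for g in priors}
        z = sum(unnorm.values())
        return {g: v / z for g, v in unnorm.items()}

    cost_with_obs = {"G1": 10.0, "G2": 14.0}     # optimal cost when the plan must embed O
    cost_without_obs = {"G1": 10.0, "G2": 10.0}  # optimal cost with no observation constraint
    priors = {"G1": 0.5, "G2": 0.5}
    print(goal_posterior(cost_with_obs, cost_without_obs, priors))
    # G1 dominates: explaining the observations costs it nothing extra.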


Belief and Truth in Hypothesised Behaviours

arXiv.org Artificial Intelligence

There is a long history in game theory on the topic of Bayesian or "rational" learning, in which each player maintains beliefs over a set of alternative behaviours, or types, for the other players. This idea has gained increasing interest in the artificial intelligence (AI) community, where it is used as a method to control a single agent in a system composed of multiple agents with unknown behaviours. The idea is to hypothesise a set of types, each specifying a possible behaviour for the other agents, and to plan our own actions with respect to those types which we believe are most likely, given the observed actions of the agents. The game theory literature studies this idea primarily in the context of equilibrium attainment. In contrast, many AI applications have a focus on task completion and payoff maximisation. With this perspective in mind, we identify and address a spectrum of questions pertaining to belief and truth in hypothesised types. We formulate three basic ways to incorporate evidence into posterior beliefs and show when the resulting beliefs are correct, and when they may fail to be correct. Moreover, we demonstrate that prior beliefs can have a significant impact on our ability to maximise payoffs in the long term, and that they can be computed automatically with consistent performance effects. Furthermore, we analyse the conditions under which we are able to complete our task optimally, despite inaccuracies in the hypothesised types. Finally, we show how the correctness of hypothesised types can be ascertained during the interaction via an automated statistical analysis.
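
A minimal Python sketch of one way to incorporate evidence into posterior beliefs over hypothesised types, assuming (for illustration only) that each type is a fixed distribution over actions; the formulations analysed in the paper are richer and state-dependent.

    # Bayesian update of beliefs over hypothesised types (hypothetical types and observations).
    def update_beliefs(prior, observed_actions, type_models):
        """P(type | actions) proportional to P(type) * product of P(a | type), normalised."""
        posterior = {}
        for t, p in prior.items():
            likelihood = 1.0
            for a in observed_actions:
                likelihood *= type_models[t].get(a, 1e-6)  # small floor for unmodelled actions
            posterior[t] = p * likelihood
        z = sum(posterior.values())
        return {t: v / z for t, v in posterior.items()}

    type_models = {"cooperative": {"share": 0.8, "defect": 0.2},
                   "selfish": {"share": 0.1, "defect": 0.9}}
    prior = {"cooperative": 0.5, "selfish": 0.5}
    print(update_beliefs(prior, ["share", "share", "defect"], type_models))
    # Belief shifts strongly toward the 'cooperative' type.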


Architectures for Activity Recognition and Context-Aware Computing

AI Magazine

The last 10 years have seen the development of novel architectures and technologies for domain-focused, task-specific systems that know many things, such as who (identities, profile, history) they are with (social context) and in what role (responsibility, security, privacy); when and where (event, time, place); why (goals, shared or personal); how they are doing it (methods, applications); and using what resources (device, services, access, and ownership). Smart spaces and devices will increasingly use such contextual knowledge to help users move seamlessly between devices and applications, without having to explicitly carry, transfer, and exchange activity context. Such systems will qualitatively shift our lives both at work and at play and significantly change our interactions with both our physical and virtual worlds. This dream of seamlessly interacting with our virtual environment has a long history, as can be seen in Apple Inc.'s 1987 Knowledge Navigator concept video. However, the combination of dramatic progress in low-power mobile computing devices and sensors with advances in artificial intelligence and human-computer interaction (HCI) over the last decade has provided the kind of platforms and algorithms that are enabling context-aware virtual personal assistants that plan activities and recognize intent. This has led to an increase in work designed to bring these ideas into real-world application and address the final technical hurdles that will make such systems a reality.


Toward Narrative Schema-Based Goal Recognition Models for Interactive Narrative Environments

AAAI Conferences

Computational models for goal recognition hold great promise for enhancing the capabilities of drama managers and director agents for interactive narratives. The problem of goal recognition, and its more general form, plan recognition, have been the subjects of extensive investigation in the AI community. However, relatively little effort has been undertaken to examine goal recognition in interactive narrative. In this paper, we propose a research agenda to improve the accuracy of goal recognition models for interactive narratives using explicit representations of narrative structure inspired by the natural language processing community. We describe a particular category of narrative representations, narrative schemas, that we anticipate will effectively capture patterns of player behavior in interactive narratives and improve the accuracy of goal recognition models.