Oren, Nir


Landmark-Based Approaches for Goal Recognition as Planning

arXiv.org Artificial Intelligence

Recognizing goals and plans from partial or full observations can be done efficiently using automated planning techniques. In many applications, it is important to recognize goals and plans not only accurately, but also quickly. To address this challenge, we develop novel goal recognition approaches based on planning techniques that rely on planning landmarks. In automated planning, landmarks are properties (or actions) that cannot be avoided if a goal is to be achieved. We show the applicability of a number of planning techniques, with an emphasis on landmarks, to goal and plan recognition tasks in two settings: (1) we use the concept of landmarks to develop goal recognition heuristics; and (2) we develop a landmark-based filtering method to refine existing planning-based goal and plan recognition approaches. These recognition approaches are empirically evaluated in experiments over several classical planning domains. We show that our goal recognition approaches yield not only accuracy comparable to (and often higher than) that of other state-of-the-art techniques, but also substantially faster recognition times than such techniques.
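The filtering idea in (2) can be sketched as follows: discard any candidate goal whose extracted landmarks are insufficiently supported by the observations, so that only plausible goals reach a full planning-based recognizer. This is an illustrative reconstruction of the description above, not the paper's actual interface; the function name, the modelling of landmarks as single facts, and the 50% threshold are all assumptions.

```python
# Illustrative sketch of landmark-based goal filtering. All names are
# hypothetical; landmarks are modelled as single facts for simplicity.

def filter_goals(extracted_landmarks, observed_facts, threshold=0.5):
    """Keep candidate goals for which at least `threshold` of their
    extracted landmarks appear among the observed facts; only the
    surviving goals are passed to a (more expensive) planning-based
    recognizer."""
    kept = []
    for goal, landmarks in extracted_landmarks.items():
        if not landmarks:
            continue  # no landmarks extracted: nothing to filter on
        ratio = len(landmarks & observed_facts) / len(landmarks)
        if ratio >= threshold:
            kept.append(goal)
    return kept
```

For example, with candidate goals `deliver_a` (landmarks `at_depot`, `holding_a`) and `deliver_b` (landmarks `at_dock`, `holding_b`), observing `at_depot` and `holding_a` would filter out `deliver_b`.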


Using Sub-Optimal Plan Detection to Identify Commitment Abandonment in Discrete Environments

arXiv.org Artificial Intelligence

Assessing whether an agent has abandoned a goal or is actively pursuing it is important when multiple agents are trying to achieve joint goals, or when agents commit to achieving goals for each other. Making such a determination for a single goal by observing only plan traces is not trivial, as agents often deviate from optimal plans for various reasons, including the pursuit of multiple goals or the inability to act optimally. In this article, we develop an approach based on domain-independent heuristics from automated planning, landmarks, and fact partitions to identify sub-optimal action steps (with respect to an optimal plan) within a plan execution trace. Such a capability is very important in domains where multiple agents cooperate and delegate tasks among themselves, e.g., through social commitments, and need to ensure that a delegating agent can infer whether or not another agent is actually progressing towards a delegated task. We demonstrate how an agent can use our technique to determine, by observing a trace, whether another agent is honouring a commitment. We empirically show, for a number of representative domains, that our approach infers sub-optimal action steps with very high accuracy and detects commitment abandonment in nearly all cases.


Landmark-Based Heuristics for Goal Recognition

AAAI Conferences

Automated planning can be used to efficiently recognize goals and plans from partially or fully observed action sequences. In this paper, we propose goal recognition heuristics that rely on information from planning landmarks: facts or actions that must occur if a plan is to achieve a goal when starting from some initial state. We develop two such heuristics: the first estimates goal completion by considering the ratio between the achieved and extracted landmarks of a candidate goal, while the second takes into account how unique each landmark is among the landmarks of all candidate goals. We empirically evaluate these heuristics over both standard goal/plan recognition problems and a set of very large problems. We show that our heuristics can recognize goals more accurately, and run orders of magnitude faster, than the current state of the art.
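The two heuristics can be sketched roughly as follows. This is a toy reconstruction from the description above, assuming landmarks are single facts; the exact formulation in the paper may differ, and all function and variable names here are illustrative.

```python
from collections import Counter

def completion(landmarks, observed):
    """First heuristic (sketch): ratio of a candidate goal's achieved
    landmarks to all of its extracted landmarks."""
    return len(landmarks & observed) / len(landmarks)

def uniqueness_scores(all_landmarks, observed):
    """Second heuristic (sketch): weight each landmark by the inverse
    of the number of candidate goals sharing it, so that landmarks
    unique to one goal count for more."""
    counts = Counter(lm for lms in all_landmarks.values() for lm in lms)
    weight = {lm: 1.0 / counts[lm] for lm in counts}
    scores = {}
    for goal, landmarks in all_landmarks.items():
        achieved = sum(weight[lm] for lm in landmarks & observed)
        scores[goal] = achieved / sum(weight[lm] for lm in landmarks)
    return scores
```

With candidate goals A (landmarks p, q) and B (landmarks q, r), observing only p gives A a completion of 0.5, and the uniqueness heuristic favours A strongly, since p belongs to A alone while q is shared.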


Monitoring Plan Optimality Using Landmarks and Domain-Independent Heuristics

AAAI Conferences

When acting, agents may deviate from the optimal plan, either because they are not perfect optimizers or because they interleave multiple unrelated tasks. In this paper, we detect such deviations by analyzing a set of observations and a monitored goal to determine if an observed agent's actions contribute towards achieving the goal. We address this problem without pre-defined static plan libraries, and instead use a planning domain definition to represent the problem and the expected agent behavior. At the core of our approach, we exploit domain-independent heuristics for estimating the goal distance, incorporating the concept of landmarks (actions which all plans must undertake if they are to achieve the goal). We evaluate the resulting approach empirically using several known planning domains, and demonstrate that our approach effectively detects such deviations.
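The core check can be sketched as: an observed step is flagged as deviating when the heuristic estimate of the distance to the monitored goal fails to decrease. The sketch below is an illustrative toy, not the paper's actual procedure; it substitutes a trivial counting heuristic for the domain-independent heuristics and landmark information used in the paper, and all names are invented.

```python
def non_contributing_steps(states, goal, h):
    """Return indices of observed steps after which the estimated
    goal distance (per heuristic h) did not decrease, i.e. steps that
    do not appear to contribute towards the monitored goal."""
    return [i for i in range(1, len(states))
            if h(states[i], goal) >= h(states[i - 1], goal)]

# A trivial stand-in heuristic: number of goal facts not yet achieved.
def unachieved(state, goal):
    return len(goal - state)
```

For instance, over the state trace {} -> {a} -> {a} -> {a, b} with goal {a, b}, the second step changes nothing relevant to the goal and is flagged.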


Markov Argumentation Random Fields

AAAI Conferences

We demonstrate an implementation of Markov Argumentation Random Fields (MARFs), a novel formalism combining elements of formal argumentation theory and probabilistic graphical models. In doing so, MARFs provide a principled technique for merging probabilistic graphical models with non-monotonic reasoning, supporting human reasoning in "messy" domains where knowledge about conflicts must be taken into account. Our implementation takes the form of a graphical tool which supports users in interpreting complex information. We have evaluated our implementation in the domain of intelligence analysis, where analysts must reason about, and determine the likelihoods of, events using information obtained from conflicting sources.


Summary Report of The First International Competition on Computational Models of Argumentation

AI Magazine

We review the First International Competition on Computational Models of Argumentation (ICCMA'15). The competition evaluated the performance of submitted solvers on four different computational tasks related to solving abstract argumentation frameworks. Each task pushed the boundaries of existing solver performance by introducing new challenges. Despite this being the first competition in the area, the large number of entrants, and the differences in their results, suggest that the competition will help shape the landscape of ongoing developments in argumentation solvers.


Opponent Models with Uncertainty for Strategic Argumentation

AAAI Conferences

This paper deals with the issue of strategic argumentation in the setting of Dung-style abstract argumentation theory. Such reasoning takes place through the use of opponent models: recursive representations of an agent's knowledge and beliefs regarding the opponent's knowledge. Using such models, we present three approaches to reasoning. The first directly utilises the opponent model to identify the best move to advance in a dialogue. The second extends our basic approach with quantitative uncertainty over the opponent's model. The final extension introduces virtual arguments into the opponent's reasoning process. Such arguments are unknown to the agent, but are presumed to exist and interact with known arguments; they therefore add a primitive notion of risk to the agent's reasoning. We have implemented our models and performed an empirical analysis showing that this added expressivity improves an agent's performance in dialogue.
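As a rough illustration of the first approach (direct use of an opponent model), the sketch below plays a simple argument game: a move is considered winning if, according to the agent's model of which arguments the opponent knows, every counter-argument the opponent could advance can itself be defeated. This is a toy reconstruction under strong simplifying assumptions, not the paper's actual machinery, and all names are invented.

```python
def attackers(arg, known_args, attacks):
    """Arguments in `known_args` that attack `arg` (attacks is a list
    of (attacker, attacked) pairs)."""
    return [b for (b, a) in attacks if a == arg and b in known_args]

def wins(arg, proponent_args, opponent_args, attacks, used):
    """True if advancing `arg` wins: every reply available to the
    other party (as the agent models it) can in turn be defeated."""
    for reply in attackers(arg, opponent_args, attacks):
        if reply not in used and wins(reply, opponent_args,
                                      proponent_args, attacks,
                                      used | {reply}):
            return False
    return True

def best_move(candidates, my_args, opponent_model, attacks):
    """Pick the first candidate argument the agent believes the
    modelled opponent cannot ultimately defeat."""
    for arg in candidates:
        if arg in my_args and wins(arg, my_args, opponent_model,
                                   attacks, {arg}):
            return arg
    return None
```

For example, with attacks b->a and c->b, an agent holding {a, c} that models its opponent as knowing {b} will advance a, since it can defeat the anticipated reply b with c; without c, no safe move exists.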