

Comparing Plan Recognition Algorithms through Standard Libraries

AAAI Conferences

Plan recognition is one of the fundamental problems of AI, applicable to many domains, from user interfaces to cyber security. We focus on a class of algorithms that use plan libraries as input to the recognition process. Despite the prevalence of these approaches, they lack a standard representation and have not been compared to each other on a common test bed. This paper directly addresses this gap by providing a standard plan library representation and evaluation criteria to consider. Our representation is comprehensive enough to describe a variety of known plan recognition problems, yet it can be easily applied to existing algorithms, which can be evaluated using our defined criteria. We demonstrate this technique on two known algorithms, SBR and PHATT. We provide meaningful insights about both the differences between the algorithms and their abilities. We show that SBR is superior to PHATT in terms of both computation time and space, but at the expense of functionality and compact representation. We also show that depth is the single feature of a plan library that increases the complexity of recognition, regardless of the algorithm used.
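
The abstract does not reproduce the representation itself. As a rough illustration, a hierarchical plan library of the kind consumed by recognizers such as SBR and PHATT might be encoded as below; the class names, fields, and example rules are assumptions for illustration, not the paper's notation.

```python
from dataclasses import dataclass, field

# A minimal, illustrative plan-library structure (hypothetical names):
# each rule decomposes a non-terminal step into child steps.

@dataclass
class Rule:
    head: str             # non-terminal being decomposed, e.g. "MakeTea"
    children: list[str]   # sub-steps, e.g. ["boil", "steep"]
    ordered: bool = True  # whether children must occur in the listed order

@dataclass
class PlanLibrary:
    goals: set[str]       # top-level goals (roots of the hierarchy)
    primitives: set[str]  # directly observable actions (leaves)
    rules: list[Rule] = field(default_factory=list)

    def depth(self, symbol: str) -> int:
        """Depth of the decomposition hierarchy below a symbol."""
        if symbol in self.primitives:
            return 0
        expansions = [r for r in self.rules if r.head == symbol]
        if not expansions:
            return 0
        return 1 + max(self.depth(c) for r in expansions for c in r.children)

# Example: a tiny two-goal library.
lib = PlanLibrary(
    goals={"MakeTea", "MakeCoffee"},
    primitives={"boil", "steep", "grind", "brew"},
    rules=[
        Rule("MakeTea", ["boil", "steep"]),
        Rule("MakeCoffee", ["grind", "brew"]),
    ],
)
print(lib.depth("MakeTea"))  # -> 1
```

The depth method mirrors the feature the abstract singles out: the deeper the decomposition hierarchy, the more expensive recognition becomes.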


Plan Recognition Design

AAAI Conferences

Goal Recognition Design (GRD) is the problem of designing a domain in a way that allows easy identification of agents' goals. This work extends the original GRD problem to the Plan Recognition Design (PRD) problem: the task of designing a domain using plan libraries in order to facilitate fast identification of an agent's plan. While GRD can help determine more quickly which goal the agent is trying to achieve, PRD can help in understanding more quickly how the agent is going to achieve its goal. We define a new measure that quantifies the worst-case distinctiveness of a given planning domain, propose a method to reduce it in a given domain, and show the reduction of this new measure in three domains from the literature.
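
The measure itself is not spelled out in the abstract. As a rough sketch of the idea, if each plan in the library is flattened into a sequence of observable actions, a worst-case-distinctiveness-style quantity can be read off as the longest observation prefix that is still consistent with more than one plan. The code below computes that simplified quantity and is an illustrative approximation, not the paper's definition.

```python
from itertools import combinations

def common_prefix_len(a: list[str], b: list[str]) -> int:
    """Length of the longest shared prefix of two action sequences."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def worst_case_distinctiveness(plans: dict[str, list[str]]) -> int:
    """Illustrative plan-level analogue of worst-case distinctiveness:
    the longest observation prefix after which at least two distinct
    plans are still possible (0 if plans diverge immediately)."""
    return max(
        (common_prefix_len(p, q)
         for (_, p), (_, q) in combinations(plans.items(), 2)),
        default=0,
    )

# Example: two plans share their first two actions, so an observer may
# need a third observation before knowing which plan is executing.
plans = {
    "plan_tea":    ["enter_kitchen", "boil_water", "steep_tea"],
    "plan_coffee": ["enter_kitchen", "boil_water", "brew_coffee"],
}
print(worst_case_distinctiveness(plans))  # -> 2
```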


Advice Provision for Choice Selection Processes with Ranked Options

AAAI Conferences

Choice selection processes are a family of bilateral games of incomplete information in which a computer agent generates advice for a human user while considering the effect of the advice on the user's behavior in future interactions. The human and the agent may share certain goals, but are essentially self-interested. This paper extends selection processes to settings in which the actions available to the human are ordered, so the user may be influenced by the advice even though he does not necessarily follow it exactly. In this work we also consider the case in which the user obtains some observation of the state of the world. We propose several approaches to model human decision making in such settings. We incorporate these models into two optimization techniques for the agent's advice provision strategy. In the first, the agent uses a social utility approach that considers the benefits and costs for both the agent and the person when making suggestions. In the second, we simplify the human model so that the agent's strategy can be modeled and solved as an MDP. In an empirical evaluation involving human users on Amazon Mechanical Turk (AMT), we show that the social utility approach significantly outperforms the MDP approach.
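
The MDP-based technique is only named in the abstract. The sketch below shows one way modeling and solving the agent's strategy as an MDP could look, using a hypothetical state space (a discretized level of user compliance with past advice), a made-up transition and reward model, and plain value iteration; none of the states, numbers, or names come from the paper.

```python
import numpy as np

# Hypothetical, simplified MDP for advice provision:
#   states  = discretized levels of user compliance with past advice
#   actions = 0: self-interested advice, 1: user-aligned advice
# Advice that favors the user tends to raise compliance; advice that
# favors the agent pays more immediately but erodes compliance.

n_states, n_actions, gamma = 5, 2, 0.95
P = np.zeros((n_actions, n_states, n_states))   # transition probabilities
R = np.zeros((n_actions, n_states))             # expected immediate reward

for s in range(n_states):
    up, down = min(s + 1, n_states - 1), max(s - 1, 0)
    P[0, s, down] += 0.8; P[0, s, s] += 0.2      # selfish advice: compliance drops
    P[1, s, up]   += 0.8; P[1, s, s] += 0.2      # aligned advice: compliance grows
    R[0, s] = 1.0 * (s / (n_states - 1))         # selfish advice pays only if followed
    R[1, s] = 0.3 * (s / (n_states - 1))         # aligned advice pays the agent less

# Standard value iteration.
V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * (P @ V)                      # Q[a, s]
    V = Q.max(axis=0)
policy = Q.argmax(axis=0)
# One possible outcome: invest in compliance at low states, exploit it at high ones.
print(policy)
```

The resulting policy illustrates the tension the abstract points to: advice that benefits the agent now can reduce the user's willingness to follow advice later.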


Strategic Advice Provision in Repeated Human-Agent Interactions (Abstract)

AAAI Conferences

This paper addresses the problem of automated advice provision in settings that involve repeated interactions between people and computer agents. This problem arises in many real-world applications such as route selection systems and office assistants. To succeed in such settings, agents must reason about how their actions in the present influence people's future actions. The paper describes several possible models of human behavior that were inspired by behavioral economic theories of people's play in repeated interactions. These models were incorporated into several agent designs to repeatedly generate offers to people playing the game. These agents were evaluated in extensive empirical investigations including hundreds of subjects who interacted with computers in different choice selection processes. The results revealed that an agent that combined a hyperbolic discounting model of human behavior with a social utility function was able to outperform alternative agent designs. We show that this approach was able to generalize to new people as well as to choice selection processes that were not used for training. Our results demonstrate that combining computational approaches with behavioral economics models of people in repeated interactions facilitates the design of advice provision strategies for a large class of real-world settings.
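
The hyperbolic discounting model is only named here. In its common form, a payoff received after a delay of t steps is weighted by 1/(1 + k*t) for some impatience parameter k; the sketch below shows how such a weighting might score the delayed benefit of following advice. The parameter k and the scoring scheme are illustrative assumptions, not the paper's model.

```python
def hyperbolic_weight(delay: float, k: float = 1.0) -> float:
    """Standard hyperbolic discount factor: a payoff after `delay` steps
    is perceived as payoff / (1 + k * delay)."""
    return 1.0 / (1.0 + k * delay)

def perceived_value(benefits_over_time: list[float], k: float = 1.0) -> float:
    """Illustrative model of how a person might weigh a stream of future
    benefits from following advice (k is a hypothetical impatience parameter)."""
    return sum(b * hyperbolic_weight(t, k) for t, b in enumerate(benefits_over_time))

# A person with a higher k discounts the delayed payoff of good advice more steeply.
print(perceived_value([0.0, 1.0, 1.0], k=0.5))  # patient user
print(perceived_value([0.0, 1.0, 1.0], k=4.0))  # impatient user
```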


Strategic Advice Provision in Repeated Human-Agent Interactions

AAAI Conferences

This paper addresses the problem of automated advice provision in settings that involve repeated interactions between people and computer agents. This problem arises in many real-world applications such as route selection systems and office assistants. To succeed in such settings, agents must reason about how their actions in the present influence people's future actions. This work models such settings as a family of repeated bilateral games of incomplete information called "choice selection processes", in which players may share certain goals, but are essentially self-interested. The paper describes several possible models of human behavior that were inspired by behavioral economic theories of people's play in repeated interactions. These models were incorporated into several agent designs to repeatedly generate offers to people playing the game. These agents were evaluated in extensive empirical investigations including hundreds of subjects who interacted with computers in different choice selection processes. The results revealed that an agent that combined a hyperbolic discounting model of human behavior with a social utility function was able to outperform alternative agent designs, including an agent that approximated the optimal strategy using continuous MDPs and an agent using epsilon-greedy strategies to describe people's behavior. We show that this approach was able to generalize to new people as well as to choice selection processes that were not used for training. Our results demonstrate that combining computational approaches with behavioral economics models of people in repeated interactions facilitates the design of advice provision strategies for a large class of real-world settings.
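
The social utility function is likewise only named. One simple way to realize the idea of weighing the agent's and the person's benefit when choosing advice is a convex combination of the two, as sketched below; the weight alpha and the candidate scores are assumptions for illustration, not the paper's formulation.

```python
def social_utility(agent_benefit: float, person_benefit: float,
                   alpha: float = 0.5) -> float:
    """Illustrative social utility: a weighted sum of the benefit the advice
    brings the agent and the benefit it brings the person (alpha is assumed)."""
    return alpha * agent_benefit + (1.0 - alpha) * person_benefit

def choose_advice(candidates: dict[str, tuple[float, float]],
                  alpha: float = 0.5) -> str:
    """Pick the advice whose (agent_benefit, person_benefit) pair
    maximizes the social utility."""
    return max(candidates, key=lambda c: social_utility(*candidates[c], alpha))

# Hypothetical candidate routes with (agent_benefit, person_benefit) scores,
# loosely echoing the route-selection example mentioned in the abstract.
candidates = {
    "toll_road":   (0.9, 0.3),   # pays the agent, costs the person
    "scenic_road": (0.2, 0.9),   # pleases the person, little agent gain
    "highway":     (0.6, 0.7),   # reasonable for both
}
print(choose_advice(candidates, alpha=0.5))  # -> "highway"
```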