Gal, Ya'akov
Comparing Plan Recognition Algorithms through Standard Libraries
Mirsky, Reuth (Ben-Gurion University of the Negev) | Galun, Ran (Ben-Gurion University of the Negev) | Gal, Ya'akov (Ben-Gurion University of the Negev) | Kaminka, Gal (Bar-Ilan University)
Plan recognition is one of the fundamental problems of AI, applicable to many domains, from user interfaces to cyber security. We focus on a class of algorithms that use plan libraries as input to the recognition process. Despite the prevalence of these approaches, they lack a standard representation and have not been compared to each other on a common test bed. This paper directly addresses this gap by providing a standard plan library representation and evaluation criteria to consider. Our representation is comprehensive enough to describe a variety of known plan recognition problems, yet it can be easily applied to existing algorithms, which can be evaluated using our defined criteria. We demonstrate this technique on two known algorithms, SBR and PHATT. We provide meaningful insights about both the differences between the algorithms and their abilities. We show that SBR is superior to PHATT both in terms of computation time and space, but at the expense of functionality and compact representation. We also show that depth is the single feature of a plan library that increases the complexity of the recognition, regardless of the algorithm used.
Plan Recognition Design
Mirsky, Reuth (Ben-Gurion University of the Negev) | Stern, Roni (Ben-Gurion University of the Negev) | Gal, Ya'akov (Kobi) (Ben-Gurion University of the Negev) | Kalech, Meir (Ben-Gurion University of the Negev)
Goal Recognition Design (GRD) is the problem of designing a domain in a way that allows easy identification of agents' goals. This work extends the original GRD problem to the Plan Recognition Design (PRD) problem, which is the task of designing a domain using plan libraries in order to facilitate fast identification of an agent's plan. While GRD can help to identify more quickly which goal the agent is trying to achieve, PRD can help in understanding more quickly how the agent is going to achieve its goal. We define a new measure that quantifies the worst-case distinctiveness of a given planning domain, propose a method to reduce it in a given domain, and show the reduction of this new measure in three domains from the literature.
Advice Provision for Choice Selection Processes with Ranked Options
Azaria, Amos (Bar-Ilan University) | Gal, Ya'akov (Ben Gurion University) | Goldman, Claudia V. (General Motors Advanced Technical Center) | Kraus, Sarit (Bar Ilan University)
Choice selection processes are a family of bilateral games of incomplete information in which a computer agent generates advice for a human user while considering the effect of the advice on the user's behavior in future interactions. The human and the agent may share certain goals, but are essentially self-interested. This paper extends selection processes to settings in which the actions available to the human are ordered, and thus the user may be influenced by the advice even though he doesn't necessarily follow it exactly. In this work we also consider the case in which the user obtains some observation on the state of the world. We propose several approaches to modeling human decision making in such settings and incorporate these models into two optimization techniques for the agent's advice provision strategy. In the first, the agent uses a social utility approach that considers the benefits and costs for both the agent and the person when making suggestions. In the second, we simplify the human model so that the agent's strategy can be modeled and solved as an MDP. In an empirical evaluation involving human users on AMT, we showed that the social utility approach significantly outperformed the MDP approach.
An Agent Design for Repeated Negotiation and Information Revelation with People
Peled, Noam (Bar Ilan University) | Gal, Ya'akov (Kobi) (Ben-Gurion University) | Kraus, Sarit (Bar Ilan University)
Many negotiations in the real world are characterized by incomplete information, and participants' success depends on their ability to reveal information in a way that facilitates agreement without compromising the individual gains of agents. This paper presents a novel agent design for repeated negotiation in incomplete information settings that learns to reveal information strategically during the negotiation process. The agent uses classical machine learning techniques to predict how people make and respond to offers during the negotiation, how they reveal information, and their response to potential revelation actions by the agent. The agent was evaluated in an extensive empirical study spanning hundreds of human subjects. Results show that the agent was able to outperform people. In particular, it learned (1) to make offers that were beneficial to people while not compromising its own benefit; (2) to incrementally reveal information to people in a way that increased its expected performance. The approach generalizes to new settings without the need to acquire additional data. This work demonstrates the efficacy of combining machine learning with opponent modeling techniques towards the design of computer agents for negotiating with people in settings of incomplete information.
Strategic Advice Provision in Repeated Human-Agent Interactions (Abstract)
Azaria, Amos (Bar Ilan University) | Rabinovich, Zinovi (Bar Ilan University) | Kraus, Sarit (Bar Ilan University) | Goldman, Claudia V. (General Motors) | Gal, Ya'akov (Ben-Gurion University of the Negev)
This paper addresses the problem of automated advice provision in settings that involve repeated interactions between people and computer agents. This problem arises in many real-world applications such as route selection systems and office assistants. To succeed in such settings, agents must reason about how their actions in the present influence people's future actions. The paper describes several possible models of human behavior that were inspired by behavioral economic theories of people's play in repeated interactions. These models were incorporated into several agent designs to repeatedly generate offers to people playing the game. These agents were evaluated in extensive empirical investigations including hundreds of subjects that interacted with computers in different choice selection processes. The results revealed that an agent that combined a hyperbolic discounting model of human behavior with a social utility function was able to outperform alternative agent designs. We show that this approach was able to generalize to new people as well as to choice selection processes that were not used for training. Our results demonstrate that combining computational approaches with behavioral economics models of people in repeated interactions facilitates the design of advice provision strategies for a large class of real-world settings.
Strategic Advice Provision in Repeated Human-Agent Interactions
Azaria, Amos (Bar Ilan University) | Rabinovich, Zinovi (Bar Ilan University) | Kraus, Sarit (Bar Ilan University) | Goldman, Claudia V. (General Motors) | Gal, Ya'akov (Ben Gurion University)
This paper addresses the problem of automated advice provision in settings that involve repeated interactions between people and computer agents. This problem arises in many real-world applications such as route selection systems and office assistants. To succeed in such settings, agents must reason about how their actions in the present influence people's future actions. This work models such settings as a family of repeated bilateral games of incomplete information called ``choice selection processes'', in which players may share certain goals, but are essentially self-interested. The paper describes several possible models of human behavior that were inspired by behavioral economic theories of people's play in repeated interactions. These models were incorporated into several agent designs to repeatedly generate offers to people playing the game. These agents were evaluated in extensive empirical investigations including hundreds of subjects that interacted with computers in different choice selection processes. The results revealed that an agent that combined a hyperbolic discounting model of human behavior with a social utility function was able to outperform alternative agent designs, including an agent that approximated the optimal strategy using continuous MDPs and an agent using epsilon-greedy strategies to describe people's behavior. We show that this approach was able to generalize to new people as well as to choice selection processes that were not used for training. Our results demonstrate that combining computational approaches with behavioral economics models of people in repeated interactions facilitates the design of advice provision strategies for a large class of real-world settings.
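The hyperbolic discounting model named in the abstract above has a standard closed form from behavioral economics: a reward A received after delay D is valued at A / (1 + kD). The following is a minimal sketch of that formula and of a simple weighted social utility; the function names, the weighting parameter alpha, and the example values are illustrative assumptions, not the authors' implementation.

```python
def hyperbolic_discount(reward, delay, k=1.0):
    """Present value of `reward` received after `delay` time steps.

    Standard hyperbolic form: reward / (1 + k * delay), where k controls
    how steeply delayed outcomes are discounted.
    """
    return reward / (1.0 + k * delay)


def social_utility(agent_gain, human_gain, alpha=0.5):
    """Weighted combination of the agent's and the person's benefit.

    alpha trades off the agent's self-interest against the user's welfare
    (alpha = 1 is purely selfish, alpha = 0 purely altruistic).
    """
    return alpha * agent_gain + (1.0 - alpha) * human_gain


# Hyperbolic discounting makes an immediate smaller reward competitive
# with a larger delayed one:
now = hyperbolic_discount(10, delay=0)    # -> 10.0
later = hyperbolic_discount(15, delay=1)  # -> 7.5
```

An agent built along these lines would score each candidate piece of advice by the discounted social utility of its predicted consequences and propose the highest-scoring one.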
Facilitating the Evaluation of Automated Negotiators using Peer Designed Agents
Lin, Raz (Bar-Ilan University) | Kraus, Sarit (Bar-Ilan University) | Oshrat, Yinon (Bar-Ilan University) | Gal, Ya'akov (Kobi) (Ben-Gurion University of the Negev)
Computer agents are increasingly deployed in settings in which they make decisions with people, such as electronic commerce, collaborative interfaces, and cognitive assistants. However, the scientific evaluation of computational strategies for human-computer decision-making is a costly process, involving time, effort and personnel. This paper investigates the use of Peer Designed Agents (PDAs) — computer agents developed by human subjects — as a tool for facilitating the evaluation process of automatic negotiators that were developed by researchers. It compared the performance of automatic negotiators that interacted with PDAs to that of automatic negotiators that interacted with actual people in different domains. The experiments included more than 300 human subjects and 50 PDAs developed by students. Results showed that the automatic negotiators outperformed PDAs in the same situations in which they outperformed people, and that on average, they exhibited the same measure of generosity towards their negotiation partners. These patterns were significant for all types of domains and for all types of automated negotiators, despite individual differences between the behavior of PDAs and people. The study thus provides empirical evidence that PDAs can alleviate the evaluation process of automatic negotiators and facilitate their design.