
Collaborating Authors

Molineaux, Matthew


Towards Explainable NPCs: A Relational Exploration Learning Agent

AAAI Conferences

Non-player characters (NPCs) in video games are a common form of frustration for players because they generally provide no explanations for their actions or provide simplistic explanations using fixed scripts. Motivated by this, we consider a new design for agents that can learn about their environments, accomplish a range of goals, and explain what they are doing to a supervisor. We propose a framework for studying this type of agent, and compare it to existing reinforcement learning and self-motivated agent frameworks. We propose a novel design for an initial agent that acts within this framework. Finally, we describe an evaluation centered around the supervisor's satisfaction and understanding of the agent's behavior.


Human-Agent Teaming as a Common Problem for Goal Reasoning

AAAI Conferences

Human-agent teaming is a difficult yet relevant problem domain to which many goal reasoning systems are well suited, due to their ability to accept outside direction and (relatively) human-understandable internal state. We propose a formal model, and multiple variations on a multi-agent problem, to clarify and unify research in goal reasoning. We describe examples of these concepts, and propose standard evaluation methods for goal reasoning agents that act as a member of a team or on behalf of a supervisor.


Towards Deception Detection in a Language-Driven Game

AAAI Conferences

There are many real-world scenarios where agents must reliably detect deceit to make decisions. When deceitful statements are made, other statements or actions may make it possible to uncover the deceit. We describe a goal reasoning agent architecture that supports deceit detection by hypothesizing about an agent’s actions, uses new observations to revise past beliefs, and recognizes the plans and goals of other agents. In this paper, we focus on one module of our architecture, the Explanation Generator, and describe how it can generate hypotheses for a most probable truth scenario despite the presence of false information. We demonstrate its use in a multiplayer tabletop social deception game, One Night Ultimate Werewolf.
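The Explanation Generator's task of ranking possible truth scenarios despite false statements can be pictured with a toy sketch: enumerate candidate role assignments and score each by how many statements it leaves consistent, discounting statements from roles that are free to lie. Everything below (the enumeration strategy, the scoring rule, the role and player names) is an invented simplification for One Night Ultimate Werewolf-style play, not the paper's actual algorithm.

```python
from itertools import permutations

# Hypothetical illustration, not the paper's Explanation Generator:
# a "truth scenario" is an assignment of roles to players; scenarios
# are ranked by how many claims they leave consistent, with claims
# from possibly-deceptive roles carrying no weight.

def best_scenario(players, roles, statements, lie_roles):
    """statements: list of (speaker, (player, role)) claims.
    Returns the role assignment consistent with the most claims."""
    def score(assignment):
        total = 0
        for speaker, (player, role) in statements:
            if assignment[speaker] in lie_roles:
                continue  # a possible liar's claim carries no weight
            total += assignment[player] == role
        return total
    candidates = [dict(zip(players, perm)) for perm in permutations(roles)]
    return max(candidates, key=score)

# Three players; the werewolf may lie, the others are assumed truthful.
statements = [("a", ("a", "villager")),   # possibly a werewolf bluffing
              ("b", ("a", "werewolf")),
              ("b", ("b", "villager")),
              ("c", ("c", "seer"))]
scenario = best_scenario(["a", "b", "c"],
                         ["werewolf", "villager", "seer"],
                         statements, lie_roles={"werewolf"})
print(scenario)  # -> {'a': 'werewolf', 'b': 'villager', 'c': 'seer'}
```

Here "a" claims villager, but the scenario in which "a" is the werewolf (and therefore lying) explains more of the other statements than any scenario that takes "a" at face value.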


Learning Unknown Event Models

AAAI Conferences

Agents with incomplete environment models are likely to be surprised, and this represents an opportunity to learn. We investigate approaches for situated agents to detect surprises, discriminate among different forms of surprise, and hypothesize new models for the unknown events that surprised them. We instantiate these approaches in a new goal reasoning agent (named FoolMeTwice), investigate its performance in simulation studies, and report that it produces plans with significantly reduced execution cost in comparison to not learning models for surprising events.
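The surprise-detection step can be sketched minimally: compare the agent's predicted next state against the observed one, and treat any disagreement as a surprise worth explaining with a new event model. The set-of-facts state representation and the `EffectModel` class below are illustrative assumptions, not FoolMeTwice's actual implementation.

```python
from dataclasses import dataclass

# Illustrative only: states are sets of ground facts; a model maps
# (state, action) to a predicted next state. A "surprise" is any fact
# the model predicted but was not observed, or observed but not predicted.

@dataclass
class EffectModel:
    effects: dict  # action -> (add_set, delete_set)

    def predict(self, state, action):
        add, delete = self.effects.get(action, (set(), set()))
        return (state - delete) | add

def detect_surprise(model, state, action, observed):
    """Return the facts on which prediction and observation disagree."""
    predicted = model.predict(state, action)
    return (predicted - observed) | (observed - predicted)

model = EffectModel({"move": ({"at-b"}, {"at-a"})})
state = {"at-a", "door-open"}
# An unknown event closed the door, so the observation disagrees
# with the model's prediction on exactly that fact.
observed = {"at-b"}
print(detect_surprise(model, state, "move", observed))  # -> {'door-open'}
```

The returned discrepancy set is what a learning agent would then try to explain, e.g. by hypothesizing an unmodeled event that deletes `door-open`.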


Domain-Independent Heuristics for Goal Formulation

AAAI Conferences

Goal-driven autonomy is a framework for intelligent agents that automatically formulate and manage goals in dynamic environments, where goal formulation is the task of identifying goals that the agent should attempt to achieve. We argue that goal formulation is central to high-level autonomy, and explain why identifying domain-independent heuristics for this task is an important research topic in high-level control. We describe two novel domain-independent heuristics for goal formulation (motivators) that evaluate the utility of goals based on the projected consequences of achieving them. We then describe their integration in M-ARTUE, an agent that balances the satisfaction of internal needs with the achievement of goals introduced externally. We assess its performance in a series of experiments in the Rovers With Compass domain. Our results show that using domain-independent heuristics yields performance comparable to using domain-specific knowledge for goal formulation. Finally, in ablation studies we demonstrate that each motivator contributes significantly to M-ARTUE’s performance.
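As a hedged sketch of the general idea — not M-ARTUE's actual motivators, projections, or weights — goal formulation by projected consequences reduces to: simulate achieving each candidate goal, score the projected outcome with each motivator, and formulate the goal with the highest weighted utility.

```python
# Illustrative sketch of motivator-based goal formulation; the
# motivators, projections, and weights here are invented for the example.

def formulate_goal(candidates, project, motivators, weights):
    """Return the candidate goal whose projected consequences maximize
    the weighted sum of motivator scores."""
    def utility(goal):
        outcome = project(goal)  # projected state after achieving goal
        return sum(w * m(outcome) for m, w in zip(motivators, weights))
    return max(candidates, key=utility)

# Toy motivators: one rewards achieving externally supplied goals, the
# other the agent's internal needs (e.g. remaining energy).
def external(outcome): return outcome["external_goals_met"]
def fitness(outcome): return outcome["energy"]

projections = {
    "recharge":       {"energy": 1.0, "external_goals_met": 0},
    "deliver-sample": {"energy": 0.2, "external_goals_met": 1},
}
goal = formulate_goal(["recharge", "deliver-sample"], projections.get,
                      [external, fitness], weights=[1.0, 0.5])
print(goal)  # -> deliver-sample  (1.0*1 + 0.5*0.2 beats 0.5*1.0)
```

Shifting the weights shifts the balance between serving the supervisor and satisfying internal needs, which is the trade-off the abstract describes M-ARTUE managing.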


Powell

AAAI Conferences

If given manually-crafted goal selection knowledge, goal reasoning agents can dynamically determine which goals they should achieve in complex environments. These agents should instead learn goal selection knowledge through expert interaction. We describe T-ARTUE, a goal reasoning agent that performs case-based active and interactive learning to discover goal selection knowledge. We also report tests of its performance in a complex environment. We found that, under some conditions, T-ARTUE can quickly learn goal selection knowledge.