Meadows, Ben
What Can I Not Do? Towards an Architecture for Reasoning about and Learning Affordances
Sridharan, Mohan (The University of Auckland) | Meadows, Ben (The University of Auckland) | Gomez, Rocio (The University of Auckland)
This paper describes an architecture for an agent to learn and reason about affordances. In this architecture, Answer Set Prolog, a declarative language, is used to represent and reason with incomplete domain knowledge that includes a representation of affordances as relations defined jointly over objects and actions. Reinforcement learning and decision-tree induction, based on this relational representation and on observations of action outcomes, are used to interactively and cumulatively (a) acquire knowledge of affordances of specific objects being operated upon by specific agents; and (b) generalize from these specific learned instances. The capabilities of this architecture are illustrated and evaluated in two simulated domains: a variant of the classic Blocks World domain, and a robot assisting humans in an office environment.
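The abstract stays at an architectural level, so the Python sketch below is only a rough, hypothetical illustration of the generalization step it describes: inducing a decision tree over agent/object attributes from observed action outcomes. None of the feature names or data come from the paper; they are invented for the example.

```python
# Hypothetical sketch (not the authors' code): learning an affordance such
# as "agent can lift object" from observed action outcomes, via the kind of
# decision-tree induction the abstract mentions.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each observation: (agent arm strength, object weight) -> did the lift succeed?
# All values are invented for illustration.
X = [[1, 1], [1, 3], [2, 1], [2, 2], [2, 3], [3, 3], [3, 1]]
y = [1,      0,      1,      1,      0,      1,      1]  # 1 = success

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The induced tree generalizes beyond the specific (agent, object) pairs
# observed, approximating a relational affordance such as:
#   affordance(lift, Object, Agent) :- strength(Agent) >= weight(Object).
print(export_text(tree, feature_names=["arm_strength", "object_weight"]))
print(tree.predict([[3, 2]]))  # query an unseen agent/object combination
```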
Explainable Agency for Intelligent Autonomous Systems
Langley, Pat (University of Auckland) | Meadows, Ben (University of Auckland) | Sridharan, Mohan (University of Auckland) | Choi, Dongkyu (University of Kansas)
As intelligent agents become more autonomous, sophisticated, and prevalent, it becomes increasingly important that humans interact with them effectively. Machine learning is now used regularly to acquire expertise, but common techniques produce opaque content whose behavior is difficult to interpret. Before they will be trusted by humans, autonomous agents must be able to explain their decisions and the reasoning that produced their choices. We will refer to this general ability as explainable agency. This capacity for explaining decisions is not an academic exercise. When a self-driving vehicle takes an unfamiliar turn, its passenger may desire to know its reasons. When a synthetic ally in a computer game blocks a player's path, the player may want to understand its purpose. When an autonomous military robot has abandoned a high-priority goal to pursue another one, its commander may request justification. As robots, vehicles, and synthetic characters become more self-reliant, people will require that they explain their behaviors on demand. The more impressive these agents' abilities, the more essential it is that we be able to understand them.
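The abstract argues for explainable agency without committing to a mechanism. Purely as a hedged sketch (every name below is invented, not from the paper), one minimal reading is an agent that records a justification alongside each decision and replays it when asked "why":

```python
# Hypothetical sketch (not from the paper): an agent that stores the
# reason behind each choice so it can justify its decisions on demand.

class ExplainableAgent:
    def __init__(self):
        self.trace = []  # (decision, reason) pairs, newest last

    def choose(self, options):
        # Toy policy: pick the highest-scoring option and record why.
        decision, score = max(options.items(), key=lambda kv: kv[1])
        reason = f"'{decision}' scored {score}, the best of {sorted(options)}"
        self.trace.append((decision, reason))
        return decision

    def explain(self, decision):
        # Answer a "why did you do that?" query from the stored trace.
        for d, reason in reversed(self.trace):
            if d == decision:
                return reason
        return f"no record of deciding '{decision}'"

agent = ExplainableAgent()
agent.choose({"turn_left": 0.9, "turn_right": 0.4})
print(agent.explain("turn_left"))
```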
Social Planning: Achieving Goals by Altering Others' Mental States
Pearce, Chris (University of Auckland) | Meadows, Ben (University of Auckland) | Langley, Pat (University of Auckland) | Barley, Mike (University of Auckland)
In this paper, we discuss a computational approach to the cognitive task of social planning. First, we specify a class of planning problems that involve an agent who attempts to achieve its goals by altering other agents' mental states. Next, we describe SFPS, a flexible problem solver that generates social plans of this sort, including ones that involve deception and reasoning about other agents' beliefs. We report the results for experiments on social scenarios that involve different levels of sophistication and that demonstrate both SFPS's capabilities and the sources of its power. Finally, we discuss how our approach to social planning has been informed by earlier work in the area and propose directions for additional research on the topic.
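SFPS itself is not described at the code level here; the following is only a speculative Python sketch of the core idea (all identifiers invented): searching over states that include other agents' belief sets, where a tell action alters what another agent believes and may assert something the speaker does not itself believe.

```python
# Hypothetical sketch (not the paper's SFPS system): a tiny planner that
# achieves goals by altering another agent's beliefs via "tell" actions.
from collections import deque

def tell(state, hearer, fact):
    # The hearer comes to believe the asserted fact, whether or not the
    # speaker holds it true -- which permits deceptive assertions.
    new = dict(state)
    new[hearer] = state[hearer] | frozenset([fact])
    return new

def plan(initial, goal_agent, goal_fact, facts):
    # Breadth-first search for the shortest sequence of tell actions
    # that makes goal_agent believe goal_fact.
    frontier = deque([(initial, [])])
    seen = set()
    while frontier:
        state, steps = frontier.popleft()
        if goal_fact in state[goal_agent]:
            return steps
        key = tuple(sorted((a, tuple(sorted(state[a]))) for a in state))
        if key in seen:
            continue
        seen.add(key)
        for hearer in state:
            for fact in facts:
                frontier.append((tell(state, hearer, fact),
                                 steps + [("tell", hearer, fact)]))
    return None

initial = {"self": frozenset({"door_locked"}), "guard": frozenset()}
print(plan(initial, "guard", "door_locked", ["door_locked"]))
# -> [('tell', 'guard', 'door_locked')]
```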
Meta-Level and Domain-Level Processing in Task-Oriented Dialogue
Gabaldon, Alfredo (Carnegie Mellon University) | Langley, Pat (Carnegie Mellon University) | Meadows, Ben (University of Auckland)
There is general agreement that knowledge plays a key role in intelligent behavior, but most work on this topic has emphasized domain-specific expertise. We argue, in contrast, that cognitive systems also benefit from meta-level knowledge that has a domain-independent character. In this paper, we propose a representational framework that distinguishes between these two forms of content, along with an integrated architecture that supports their use for abductive interpretation and hierarchical skill execution. We demonstrate this framework's viability on high-level aspects of extended dialogue that require reasoning about, and altering, participants' beliefs and goals. Furthermore, we demonstrate its generality by showing that the meta-level knowledge operates with different domain-level content. We conclude by reviewing related work on these topics and discussing promising directions for future research.
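As a hedged illustration of the meta-level/domain-level split the abstract describes (the interpretation rules and both domains below are invented, not taken from the paper), a single domain-independent rule about speech acts can be applied unchanged to content from two different domains:

```python
# Hypothetical sketch (not the paper's architecture): a domain-independent
# meta-level interpretation rule applied to domain-level content.

def interpret(utterance):
    # Meta-level knowledge: an abductive reading of a speech act, stated
    # without reference to any particular domain.
    kind, speaker, hearer, content = utterance
    if kind == "inform":
        return [("belief", speaker, content),
                ("goal", speaker, ("belief", hearer, content))]
    if kind == "request":
        return [("goal", speaker, ("done", hearer, content))]
    return []

# Domain-level content: the same meta-level rules cover both domains.
medical = ("inform", "doctor", "patient", ("dosage", "aspirin", "100mg"))
logistics = ("request", "dispatcher", "driver", ("deliver", "crate7", "depot"))

for utt in (medical, logistics):
    for conclusion in interpret(utt):
        print(conclusion)
```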