Collaborating Authors

OSCAR: An Agent Architecture Based on Defeasible Reasoning

AAAI Conferences

OSCAR is a fully implemented architecture for a cognitive agent, based largely on the author's work in philosophy concerning epistemology and practical cognition. The seminal idea is that a generally intelligent agent must be able to function in an environment in which it is ignorant of most matters of fact. The architecture incorporates a general-purpose defeasible reasoner, built on top of an efficient natural deduction reasoner for first-order logic. It is based upon a detailed theory about how the various aspects of epistemic and practical cognition should interact, and many of the details are driven by theoretical results concerning defeasible reasoning.
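As a concrete illustration of the kind of defeasible inference described above, here is a minimal Python sketch in which prima facie reasons license conclusions unless a believed defeater blocks them. This is an invented toy, not OSCAR's actual machinery; the names Reason, conclusions, and defeaters are assumptions made purely for this example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Reason:
    premises: frozenset      # propositions the reason depends on
    conclusion: str          # proposition it supports
    defeasible: bool = True  # prima facie reasons can be defeated

def conclusions(facts, reasons, defeaters):
    """Forward-chain from facts; a defeasible reason fires only if
    none of its defeaters is currently believed. (A real defeasible
    reasoner would also recompute defeat statuses as new conclusions
    are drawn; this single-direction pass is only an illustration.)"""
    believed = set(facts)
    changed = True
    while changed:
        changed = False
        for r in reasons:
            if r.premises <= believed and r.conclusion not in believed:
                blocked = r.defeasible and any(
                    d in believed for d in defeaters.get(r, ()))
                if not blocked:
                    believed.add(r.conclusion)
                    changed = True
    return believed

# The classic Tweety example: being a penguin defeats the prima facie
# inference from "bird" to "flies".
r_fly = Reason(frozenset({"bird(tweety)"}), "flies(tweety)")
believed = conclusions(
    facts={"bird(tweety)", "penguin(tweety)"},
    reasons=[r_fly],
    defeaters={r_fly: {"penguin(tweety)"}},
)
assert "flies(tweety)" not in believed
```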


Logical Foundations for Decision-Theoretic Planning

AAAI Conferences

Decision-theoretic planning attempts to combine the resources of classical AI planning theory and contemporary decision theory. The basic idea is that plans are assigned expected values, and the planning agent chooses between competing plans in terms of those expected values. This complicates the search for plans in predictable ways, and the natural inclination of the planning theorist is to turn immediately to the task of modifying existing algorithms or finding new algorithms for decision-theoretic planning. The purpose of this paper is to emphasize that there are logical problems that must be solved before the decision-theoretic planning task is even well defined, and to propose solutions to those problems. The natural assumption is that if we can assign expected values to plans, then the choice between competing plans is made by simply choosing the competitor with the highest expected value. I will argue that this assumption is false. It fails because plans, unlike the acts that are the subject of classical decision theory, are structured objects that can be embedded within one another. Consider a planning agent residing in a realistic world in which both its goals and its knowledge change over time. This rules out the kind of toy planning problem often encountered in AI, in which the planner has a small fixed set of goals and a fixed knowledge base, is able to plan once for all those goals simultaneously, and then stops.
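The paper's central claim, that picking the competitor with the highest expected value is not well defined once plans can be embedded in one another, can be illustrated with a small Python example; the probabilities and utilities below are invented solely for illustration and do not come from the paper.

```python
def expected_value(outcomes):
    """Expected value of a plan, given (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Two competing local plans for the same goal (invented numbers).
ev_a = expected_value([(0.9, 10), (0.1, -5)])   # 8.5
ev_b = expected_value([(0.6, 12), (0.4,  2)])   # 8.0
assert ev_a > ev_b                              # in isolation, A wins

# But plans embed in larger plans, and the utilities of plan steps
# interact rather than simply adding: a master plan built around B
# can beat the corresponding master plan built around A.
ev_master_a = expected_value([(0.9, 10 + 1), (0.1, -5 - 8)])  # 8.6
ev_master_b = expected_value([(0.6, 12 + 6), (0.4,  2 + 1)])  # 12.0
assert ev_master_b > ev_master_a
```

The point of the toy numbers is that the locally best plan need not be part of the globally best plan, so pairwise comparison of expected values does not by itself settle what the agent should do.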



Kern-Isberner

AAAI Conferences

Defeasible argumentation and default reasoning are usually perceived as two similar but distinct approaches to commonsense reasoning. In this paper, we combine these two fields by viewing (defeasible resp.


Bench-Capon

AAAI Conferences

There are two aspects of practical reasoning that present particular difficulties for current approaches to modelling practical reasoning through argumentation: temporal aspects, and the intrinsic worth of actions. Time is important because actions change the state of the world, so we need to consider future states as well as past and present ones. Equally, it is often not what we do but the way that we do it that matters: the same future state may be reachable through either desirable or undesirable actions, and actions are often done for their own sake rather than for the sake of their consequences. In this paper we will present a semantics for practical reasoning, based on a formalisation originally developed for reasoning about commands, in which actions and states are treated as having equal status. We will show how using these semantics facilitates the handling of the temporal aspects of practical reasoning, and enables, where appropriate, the justification of actions without reference to their consequences.
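To make the idea of treating actions and states as equal in status more concrete, here is a minimal Python sketch. It is not the paper's command-based formalisation; the transition structure and the value labels are assumptions invented for illustration.

```python
from typing import NamedTuple

class Transition(NamedTuple):
    src: str     # state the action is performed in
    action: str  # the action itself
    dst: str     # state the action brings about

# Values promoted by states versus by actions themselves (illustrative).
state_values  = {"rescued":      {"life"}}
action_values = {"keep_promise": {"honour"}}

def justifications(t: Transition) -> set:
    """Values an agent can cite for performing t: those intrinsic to
    the action itself plus those promoted by the resulting state."""
    return action_values.get(t.action, set()) | state_values.get(t.dst, set())

print(justifications(Transition("start", "keep_promise", "rescued")))
# {'honour', 'life'} -- partly justified by the action itself
print(justifications(Transition("start", "break_in", "rescued")))
# {'life'} -- same future state, justified only by its consequences
```

Because the action carries a value of its own, the first transition is partly justified intrinsically, while the second can appeal only to its consequences, matching the distinction the abstract draws between desirable and undesirable routes to the same state.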