Introduction The problem of action selection, selecting appropriate actions to perform in a given situation, has received a great deal of attention in AI. Much of the recent discussion about action selection has concerned the distinction between "planning" and "reaction", terms with which few researchers have been entirely happy. While there is broad consensus that planning and reaction are two points on a spectrum rather than opposing sides of a dichotomy, we believe that the use of these terms has encouraged the conflation of other, more basic, distinctions. Furthermore, it has encouraged researchers seeking intermediate points in the planning/reaction spectrum to consider mostly hybrid systems formed by fusing pure planning with pure reaction, rather than searching for truly novel algorithms. This paper is a personal view of the field as we see it.
We develop a general framework for agent abstraction based on the situation calculus and the ConGolog agent programming language. We assume that we have a high-level specification and a low-level specification of the agent, both represented as basic action theories. A refinement mapping specifies how each high-level action is implemented by a low-level ConGolog program and how each high-level fluent can be translated into a low-level formula. We define a notion of sound abstraction between such action theories in terms of the existence of a suitable bisimulation between their respective models. Sound abstractions have many useful properties that ensure that we can reason about the agent's actions (e.g., executability, projection, and planning) at the abstract level, and refine and concretely execute them at the low level. We also characterize the notion of complete abstraction where all actions (including exogenous ones) that the high level thinks can happen can in fact occur at the low level.
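The notion of a bisimulation between the models of the two action theories can be illustrated in miniature. The sketch below is a hypothetical, simplified stand-in: it computes the greatest bisimulation between two finite labelled transition systems by fixpoint refinement, rather than working with situation-calculus models or ConGolog programs; all names (`greatest_bisimulation`, the transition dictionaries) are illustrative assumptions, not from the paper.

```python
# Illustrative sketch only: a naive greatest-bisimulation computation between
# two finite labelled transition systems, standing in for the model-level
# bisimulation that relates high- and low-level action theories.

def greatest_bisimulation(states1, states2, trans1, trans2):
    """trans1/trans2 map (state, action) -> set of successor states."""
    actions = {a for (_, a) in trans1} | {a for (_, a) in trans2}
    # Start from the full relation and strip violating pairs to a fixpoint.
    rel = {(s, t) for s in states1 for t in states2}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(rel):
            ok = True
            for a in actions:
                succ_s = trans1.get((s, a), set())
                succ_t = trans2.get((t, a), set())
                # "Forth": every a-move of s must be matched by some a-move of t.
                if any(all((s2, t2) not in rel for t2 in succ_t) for s2 in succ_s):
                    ok = False
                # "Back": every a-move of t must be matched by some a-move of s.
                if any(all((s2, t2) not in rel for s2 in succ_s) for t2 in succ_t):
                    ok = False
            if not ok:
                rel.discard((s, t))
                changed = True
    return rel

# Two tiny systems that match each other step for step:
t1 = {("p0", "a"): {"p1"}}
t2 = {("q0", "a"): {"q1"}}
rel = greatest_bisimulation({"p0", "p1"}, {"q0", "q1"}, t1, t2)
print(("p0", "q0") in rel)  # → True: the initial states are bisimilar
```

Under a sound abstraction, such a relation guarantees that executability and projection queries answered at the high level are matched by corresponding refinements at the low level.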
In this work, we look at the challenge of learning in an action game, Infinite Mario. Learning to play an action game can be divided into two distinct but related problems: learning an object-related behavior and selecting a primitive action. We propose a framework that allows for the use of reinforcement learning for both of these problems. We present promising results in some instances of the game and identify some problems that might affect learning.
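Both learning problems can, in principle, be driven by the same value-based update. The sketch below shows standard tabular Q-learning on a toy "walk right to the goal" task; the task and all names (`q_learning`, the state and action encoding) are hypothetical illustrations, not the paper's actual Infinite Mario setup.

```python
# Hypothetical sketch: the tabular Q-learning backup that a value-based
# treatment of both behavior selection and primitive-action selection could
# share. The 5-state corridor below is a stand-in for the real game.
import random

def q_learning(n_states=5, actions=(-1, +1), episodes=500,
               alpha=0.5, gamma=0.9, epsilon=0.1):
    q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    goal = n_states - 1
    for _ in range(episodes):
        s = 0
        while s != goal:
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == goal else 0.0
            # Standard Q-learning backup.
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in actions)
                                  - q[(s, a)])
            s = s2
    return q

random.seed(0)
q = q_learning()
policy = {s: max((-1, +1), key=lambda a: q[(s, a)]) for s in range(4)}
print(policy)  # the learned greedy policy moves right toward the goal
```

In the paper's setting, the state would instead encode object-relative features, and one such learner would choose among behaviors while another chooses primitive actions.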
Although artificial intelligence has made substantial progress over many years, research has tended to focus on solving problems without regard to any real external environment or to the notion of a reasoning agent. In other words, the problems and their solutions, while significant, were limited in that they were divorced from real situations. More recently, however, the importance of these limitations has been recognised. One consequence is the rapid growth of interest in the design and construction of agents as systems exhibiting intelligent behaviour. Concepts of agents and agency are increasingly being used in a range of areas in artificial intelligence (AI) and computer science (Wooldridge & Jennings 1995).