Emotionally Intelligent Agents: The Outline of a Resource-Oriented Approach
Alastair Burt

AAAI Conferences

In the last few years, intelligent agents have become an important paradigm for software development, particularly on the Internet (Bradshaw 1997). There are several reasons to believe that emotions and related concepts will become increasingly relevant to the design of such intelligent agents. First, agents are being used to implement lifelike characters in software for games, interactive drama, and general user interfaces. The agent-oriented paradigm, in which actions are generated autonomously from high-level goals, is arguably the best way to program them (Reynolds 1987). In all forms of lifelike characters, the user is invited to suspend disbelief and treat the object on the screen as if it were a real person or animal.


Dynamic Planning Networks

arXiv.org Machine Learning

We introduce Dynamic Planning Networks (DPN), a novel architecture for deep reinforcement learning that combines model-based and model-free aspects for online planning. Our architecture learns to dynamically construct plans using a learned state-transition model, selecting and traversing between simulated states and actions to maximize information before acting. In contrast to model-free methods, model-based planning lets the agent test action hypotheses efficiently, without costly trial and error in the environment. DPN learns to form plans efficiently by expanding a single action-conditional state transition at a time instead of exhaustively evaluating every action, reducing the number of state transitions required during planning by up to 96%. We observe various emergent planning patterns used to solve environments, including classical search methods such as breadth-first and depth-first search. DPN shows improved data efficiency, performance, and generalization to new and unseen domains in comparison to several baselines.
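
The abstract's key mechanism, expanding one action-conditional transition at a time under a learned selection policy rather than enumerating every action, can be pictured with a short sketch. This is an illustration only, not the authors' implementation: `select_state`, `select_action`, and `model` are hypothetical stand-ins for DPN's learned components.

```python
import random

# Illustrative sketch of single-transition expansion (not the DPN code).
# At each planning step the agent expands exactly one simulated
# (state, action) pair instead of evaluating every available action.

def plan(root_state, model, select_action, select_state, budget):
    """Expand `budget` simulated transitions, one at a time."""
    frontier = [root_state]      # simulated states reachable so far
    transitions = []             # recorded (state, action, next_state) tuples
    for _ in range(budget):
        state = select_state(frontier)     # in DPN this choice is learned
        action = select_action(state)      # likewise learned, not random
        next_state = model(state, action)  # learned state-transition model
        transitions.append((state, action, next_state))
        frontier.append(next_state)
    return transitions

# Toy usage: integer states, two actions, random stand-in policies.
toy_model = lambda s, a: s + (1 if a == "right" else -1)
trace = plan(
    root_state=0,
    model=toy_model,
    select_action=lambda s: random.choice(["left", "right"]),
    select_state=lambda frontier: random.choice(frontier),
    budget=8,
)
print(trace)  # eight expanded transitions, e.g. (0, 'right', 1), ...
```

Expanding one transition per step means the planning cost scales with the expansion budget rather than with the action branching factor, which is how the exhaustive per-action evaluation the abstract contrasts against is avoided.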



A Survey of Research in Distributed, Continual Planning

AI Magazine

Complex, real-world domains require rethinking traditional approaches to AI planning. Planning and executing the resulting plans in a dynamic environment implies a continual approach in which planning and execution are interleaved, uncertainty in the current and projected world state is recognized and handled appropriately, and replanning can be performed when the situation changes or planned actions fail. Furthermore, complex planning and execution problems may require multiple computational agents and human planners to collaborate on a solution. In this article, we describe a new paradigm for planning in complex, dynamic environments, which we term distributed, continual planning (DCP). We argue that developing DCP systems will be necessary for planning applications to be successful in these environments. We give a historical overview of research leading to the current state of the art in DCP and describe research in distributed and continual planning.
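
The article is a survey rather than a single algorithm, but the loop the abstract describes, planning and execution interleaved with replanning on failure or change, can be sketched in a few lines. All names below (`observe`, `make_plan`, `execute`, `still_valid`) are hypothetical placeholders, not an API from any surveyed system.

```python
# Minimal sketch of the interleaved plan/execute/replan loop described
# in the abstract. Every callable here is a hypothetical placeholder.

def continual_planning(goal, observe, make_plan, execute, still_valid):
    """Interleave planning with execution, replanning when a step fails
    or the observed world no longer matches the plan's assumptions."""
    state = observe()
    plan = make_plan(state, goal)
    while plan and state != goal:
        step = plan.pop(0)
        succeeded = execute(step)   # act in the dynamic environment
        state = observe()           # re-observe: the world may have changed
        if not succeeded or not still_valid(plan, state, goal):
            plan = make_plan(state, goal)  # replan from the new state
    return state

# Toy usage: move an integer "robot" from position 0 to 3.
pos = [0]
final = continual_planning(
    goal=3,
    observe=lambda: pos[0],
    make_plan=lambda s, g: ["step"] * (g - s),
    execute=lambda step: pos.__setitem__(0, pos[0] + 1) or True,
    still_valid=lambda plan, s, g: len(plan) == g - s,
)
print(final)  # 3
```

A distributed version of this loop would run one instance per agent, with planning coordinated across them; how that coordination happens is precisely where the DCP systems the survey covers differ.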