Agents


Coordination through Joint Intentions in Industrial Multiagent Systems

AI Magazine

My Ph.D. dissertation develops and implements a new model of multiagent coordination, called JOINT RESPONSIBILITY (Jennings 1992b), based on the notion of joint intentions. The responsibility framework was devised specifically for coordinating behavior in complex, unpredictable, and dynamic environments, such as industrial control. The need for such a principled model became apparent during the development of a general-purpose cooperation framework (GRATE) and its application to two real-world industrial problems.
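To give a concrete picture of the coordination idea, here is a minimal, hypothetical sketch of the convention usually associated with joint intentions: an agent that is jointly committed to a team goal informs its teammates when it comes to believe the goal is achieved or impossible, rather than silently dropping it. This is not Jennings's GRATE or JOINT RESPONSIBILITY code; all class and method names are invented for illustration.

```python
# Illustrative sketch of a joint-commitment convention (not GRATE/JOINT RESPONSIBILITY code):
# an agent jointly committed to a goal notifies teammates before dropping its commitment.
class TeamAgent:
    def __init__(self, name, teammates):
        self.name = name
        self.teammates = teammates        # other agents sharing the joint goal
        self.committed = True

    def update_belief(self, goal_status):
        """goal_status is 'achieved', 'impossible', or 'pending' (assumed vocabulary)."""
        if self.committed and goal_status in ("achieved", "impossible"):
            for mate in self.teammates:
                mate.receive(self.name, goal_status)   # convention: tell the team first
            self.committed = False                     # only then drop the commitment

    def receive(self, sender, goal_status):
        print(f"{self.name}: {sender} reports the joint goal is {goal_status}")
        self.committed = False

a, b = TeamAgent("A", []), TeamAgent("B", [])
a.teammates, b.teammates = [b], [a]
a.update_belief("achieved")    # B is informed before A abandons the joint goal
```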


Intelligence without Robots: A Reply to Brooks

AI Magazine

In his recent papers, Intelligence without Representation and Intelligence without Reason, Brooks argues for mobile robots as the foundation of AI research. This article argues that even if we seek to investigate complete agents in real-world environments, robotics is neither necessary nor sufficient as a basis for AI research. The article proposes real-world software environments, such as operating systems or databases, as a complementary substrate for intelligent-agent research and considers the relative advantages of software environments as test beds for AI. First, the cost, effort, and expertise necessary to develop and systematically experiment with software artifacts are relatively low. Second, software environments circumvent many thorny but peripheral research issues that are inescapable in physical environments. Brooks's mobile robots tug AI toward a bottom-up focus in which the mechanics of perception and mobility mingle inextricably with, or even supersede, core AI research. In contrast, the softbots (software robots) I advocate facilitate the study of classical AI problems in real-world (albeit software) domains. For example, the UNIX softbot under development at the University of Washington has led us to investigate planning with incomplete information, interleaving planning and execution, and a host of related high-level issues.
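For readers unfamiliar with the softbot idea, the following is a minimal, hypothetical sketch of interleaving planning and execution in a software environment; it is not the University of Washington softbot's code. The agent fills gaps in its incomplete information by issuing sensing actions (directory listings) as it goes; the goal and function names are invented.

```python
# Hypothetical sketch: interleaved planning and execution with sensing actions
# in a software environment (illustrative only, not the UW softbot).
import os
from typing import Optional

def find_file(goal_name: str, start_dir: str = ".") -> Optional[str]:
    """Search for goal_name, sensing (listing) one directory at a time."""
    frontier = [start_dir]                 # partial plan: directories still to examine
    while frontier:
        current = frontier.pop()
        try:
            entries = os.listdir(current)  # sensing action: gather missing information
        except OSError:
            continue                       # execution failure: skip this branch and go on
        for entry in entries:
            path = os.path.join(current, entry)
            if entry == goal_name:
                return path                # goal achieved; stop executing
            if os.path.isdir(path):
                frontier.append(path)      # extend the plan with a new subgoal
    return None

if __name__ == "__main__":
    print(find_file("notes.txt"))          # None if the file is not reachable from '.'
```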


Software Agents: Completing Patterns and Constructing User Interfaces

Journal of Artificial Intelligence Research

To support the goal of allowing users to record and retrieve information, this paper describes an interactive note-taking system for pen-based computers with two distinctive features. First, it actively predicts what the user is going to write. Second, it automatically constructs a custom, button-box user interface on request. The system is an example of a learning-apprentice software agent. A machine learning component characterizes the syntax and semantics of the user's information. A performance system uses this learned information to generate completion strings and construct a user interface. Description of Online Appendix: People like to record information. Doing this on paper is initially efficient but lacks flexibility. Recording information on a computer is less efficient but more powerful. In our new note-taking software, the user records information directly on a computer. Behind the interface, an agent acts for the user. To help, it provides defaults and constructs a custom user interface. The demonstration is a QuickTime movie of the note-taking agent in action. The file is a binhexed self-extracting archive. Macintosh utilities for binhex are available from mac.archive.umich.edu. QuickTime is available from ftp.apple.com in the dts/mac/sys.soft/quicktime.
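The abstract describes the behavior only at a high level; the sketch below is a hypothetical illustration of the completion idea, not the paper's system: record the user's past entries and propose the most frequent ones that extend the current prefix. The class, method names, and sample notes are invented.

```python
# Illustrative sketch of prefix completion learned from past entries
# (not the pen-based note-taking system described above).
from collections import Counter

class CompletionAgent:
    def __init__(self):
        self.history = Counter()           # frequency of previously recorded entries

    def record(self, entry: str) -> None:
        """Store a completed note; stands in for the learning component."""
        self.history[entry.strip()] += 1

    def complete(self, prefix: str, k: int = 3) -> list:
        """Return up to k of the most frequent past entries that extend the prefix."""
        matches = [(count, text) for text, count in self.history.items()
                   if text.startswith(prefix) and text != prefix]
        return [text for _, text in sorted(matches, reverse=True)[:k]]

agent = CompletionAgent()
for note in ["lunch with Pat 12:30", "lunch with Pat 1:00", "call dentist"]:
    agent.record(note)
print(agent.complete("lunch"))   # suggests the two 'lunch with Pat ...' entries
```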


Pagoda: A Model for Autonomous Learning in Probabilistic Domains

AI Magazine

My Ph.D. dissertation describes PAGODA (probabilistic autonomous goal-directed agent), a model for an intelligent agent that learns autonomously in domains containing uncertainty. The ultimate goal of this line of research is to develop intelligent problem-solving and planning systems that operate in complex domains, largely function autonomously, use whatever knowledge is available to them, and learn from their experience. PAGODA was motivated by two specific requirements: The agent should be capable of operating with minimal intervention from humans, and it should be able to cope with uncertainty (which can be the result of inaccurate sensors, a nondeterministic environment, complexity, or sensory limitations). I argue that the principles of probability theory and decision theory can be used to build rational agents that satisfy these requirements.
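The decision-theoretic principle the abstract appeals to can be stated compactly: choose the action with the highest expected utility under the agent's current probabilistic model of outcomes. The sketch below is an illustrative toy, not PAGODA itself; the actions, probabilities, and utilities are invented.

```python
# Illustrative sketch of expected-utility action selection under uncertainty
# (not PAGODA's implementation; all numbers are made up for the example).

def expected_utility(outcome_dist, utility):
    """Sum of P(outcome) * U(outcome) over an action's possible outcomes."""
    return sum(p * utility[o] for o, p in outcome_dist.items())

def choose_action(model, utility):
    """Pick the action maximizing expected utility given the current outcome model."""
    return max(model, key=lambda a: expected_utility(model[a], utility))

# Toy model: noisy sensors make 'move' succeed only 70% of the time.
model = {
    "move":  {"at_goal": 0.7, "stuck": 0.3},
    "sense": {"better_map": 0.9, "no_change": 0.1},
}
utility = {"at_goal": 10.0, "stuck": -5.0, "better_map": 2.0, "no_change": 0.0}

print(choose_action(model, utility))   # 'move': 0.7*10 - 0.3*5 = 5.5 vs 'sense': 1.8
```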


AAAI Workshop on Cooperation Among Heterogeneous Intelligent Agents

AI Magazine

Recent attempts to develop larger and more complex knowledge-based systems have revealed the shortcomings and problems of centralized, single-agent architectures and have acted as a springboard for research in distributed AI (DAI). Although initial research efforts in DAI concentrated on issues relating to homogeneous systems (that is, systems using agents of a similar type or with similar knowledge), there is now increasing interest in systems composed of heterogeneous components. The workshop on cooperation among heterogeneous intelligent agents, held July 15 during the 1991 National Conference on Artificial Intelligence, was organized by Evangelos Simoudis, Mark Adler, Michael Huhns, and Edmund Durfee. It was designed to bring together researchers and practitioners who are studying how to enable a heterogeneous collection of independent intelligent systems to cooperate in solving problems that require their combined abilities.


AAAI Workshop on Cooperation Among Heterogeneous Intelligent Agents

AI Magazine

We summarize the workshop on cooperation among heterogeneous intelligent agents, held July 15 during the 1991 National Conference on Artificial Intelligence and organized by Evangelos Simoudis, Mark Adler, Michael Huhns, and Edmund Durfee. It was designed to bring together researchers and practitioners who are studying how to enable a heterogeneous collection of independent intelligent systems to cooperate in solving problems that require their combined abilities. Fifty submissions were received, and 43 contributors were invited to the workshop, which had four sessions. Among the workshop's principal themes were the ways in which a DAI system can use as agents a collection of existing knowledge-based systems that have been developed under a variety of implementation philosophies, and computer environments that facilitate cooperation among human problem solvers of diverse abilities. In particular, representations must be agreed on before invocation, agents could represent the same knowledge differently to optimize their particular use of it, or agents could obtain knowledge from other agents, and methods must be created for agents to assimilate that knowledge. One approach discussed was to create a special type of agent that is able to act as a broker for the existing agents that need to participate in a blackboard architecture, so it can cooperate with other agents. Agents can also negotiate and converge on decisions by making deals under various types of pressure.


AAAI 1991 Fall Symposium Series Reports

AI Magazine

The Association for the Advancement of Artificial Intelligence held its 1991 Fall Symposium Series on November 15-17 at the Asilomar Conference Center, Pacific Grove, California. This article contains summaries of the four symposia: Discourse Structure in Natural Language Understanding and Generation; Knowledge and Action at Social and Organizational Levels; Principles of Hybrid Reasoning; and Sensory Aspects of Robotic Intelligence.


On Seeing Robots

Classics

It is argued that situated agents should be designed using a unitary on-line computational model. The Constraint Net model of Zhang and Mackworth satisfies that requirement. Two systems for situated perception built in our laboratory are described to illustrate the new approach: one for visual monitoring of a robot's arm, the other for real-time visual control of multiple robots competing and cooperating in a dynamic world. First proposal for robot soccer. Proc. VI-92, 1992; later published in the book Computer Vision: System, Theory, and Applications, pages 1-13, World Scientific Press, Singapore, 1993.
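For readers who want a concrete picture of the "unitary on-line computational model" argument, here is a minimal, hypothetical sketch, not the Constraint Net formalism itself: perception and control are treated as transductions on synchronized streams, both updated at every tick of a single clock. The functions and values are invented for illustration.

```python
# Illustrative sketch of a unitary on-line loop: perception and control are
# transductions driven by one shared clock (not Zhang and Mackworth's Constraint Nets).
def tracker(image):
    """Stand-in perception transduction: extract a target position from an image."""
    return image.get("target", (0.0, 0.0))

def controller(position, setpoint=(0.0, 0.0), gain=0.5):
    """Stand-in control transduction: proportional command toward the setpoint."""
    return tuple(gain * (s - p) for p, s in zip(position, setpoint))

def run(images):
    """Each input frame yields one control command; sensing and acting share a tick."""
    for image in images:                # one shared clock drives both transductions
        position = tracker(image)       # perception at this tick
        yield controller(position)      # control at the same tick

frames = [{"target": (2.0, -1.0)}, {"target": (1.0, -0.5)}]
print(list(run(frames)))                # [(-1.0, 0.5), (-0.5, 0.25)]
```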