


Strategic Design of Mobile Agents

AI Magazine

Electronic commerce, the conduct of business activities electronically by digital media, is now part of everyday business and is changing the way many individuals and organizations think about and perform their work. Even after the sharp falls in the share prices of many "dotcoms" since early 2000, electronic commerce is still likely to have a major and lasting effect on most forms of economic activity, and advances in web-based technologies, in particular automation and delegation technologies, further support its growth. For example, software programs can be used to obtain cheaper prices for utilities such as basic telephone services. A simple program can be installed to monitor and direct long-distance calls: the user dials the country code, and as he or she continues to dial the telephone number, the program contacts various long-distance providers and negotiates the best deal. The program can be set up to inform the user about the price before the call is connected; for example, the best rate for this call is nine cents a minute with no minimum charge.


Specifying Rules for Electronic Auctions

AI Magazine

We examine the design space of auction mechanisms and identify three core activities that structure this space. Formal parameters qualifying the performance of core activities enable precise specification of auction rules. This specification constitutes an auction description language that can be used in the implementation of configurable marketplaces. The specification also provides a framework for organizing previous work and identifying new possibilities in auction design.
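To make the idea of parameterized auction rules concrete, here is a minimal sketch in the spirit of a configurable marketplace. The field names, policies, and the bid-admission rule are illustrative assumptions, not the article's actual specification language or its three core activities.

```python
from dataclasses import dataclass

# Hypothetical parameterization of an auction's rules; the names are
# assumptions for illustration, not the article's terminology.
@dataclass
class AuctionRules:
    ascending: bool = True           # must a new bid beat the current best?
    clear_policy: str = "at_close"   # when the market clears: "at_close" or "on_each_bid"
    reveal_quotes: bool = True       # are price quotes published to bidders?

def admit_bid(rules: AuctionRules, best_so_far, bid) -> bool:
    """Apply the bid-admission rule: in an ascending auction, accept a
    bid only if it improves on the current best bid."""
    if best_so_far is None:
        return True
    return bid > best_so_far if rules.ascending else True

english = AuctionRules(ascending=True, clear_policy="at_close")
print(admit_bid(english, 100, 105))  # an improving bid is admitted
print(admit_bid(english, 100, 95))   # a non-improving bid is rejected
```

Changing the parameter values, rather than the marketplace code, is what yields different auction types in this style of specification.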


AI and Agents: State of the Art

AI Magazine

This article is a reflection on agent-based AI. My contention is that AI research should focus on interactive, autonomous systems, that is, agents; emerging technologies demand it. We see how recent developments in (multi-)agent-oriented research have taken us closer to the original AI goal, namely, to build intelligent systems of general competence. Agents are not a panacea, though. I point out several areas, such as design description, implementation, reusability, and security, that must be developed before agents are universally accepted as the AI of the future.


Leveled-Commitment Contracting: A Backtracking Instrument for Multiagent Systems

AI Magazine

In (automated) negotiation systems for self-interested agents, contracts have traditionally been binding: they do not allow the parties to accommodate future events. Contingency contracts address this but are often impractical. As an alternative, we propose leveled-commitment contracts, in which the level of commitment is set by decommitting penalties: to be freed from the contract, an agent simply pays its penalty to the other contract party (or parties). A self-interested agent will be reluctant to decommit because some other contract party might decommit first, in which case the former agent is freed from the contract, does not incur a penalty, and collects a penalty from the other party. We show that despite such strategic decommitting, leveled commitment increases the expected payoffs of all contract parties and can enable deals that are impossible under full commitment. Different decommitting mechanisms are introduced and compared, and practical prescriptions for market designers are presented. A contract optimizer, ECOMMITTER, is provided on the web.
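A toy sketch of the decommitting calculus described above, with invented numbers: an agent pays its penalty and abandons the contract only when an outside offer exceeds the contract's payoff by more than the penalty. This is an illustration of the idea, not the article's ECOMMITTER optimizer or its game-theoretic analysis.

```python
# Illustrative leveled-commitment reasoning (numbers are hypothetical):
# decommit when the outside offer, net of the decommitting penalty,
# beats the payoff from honoring the contract.
def should_decommit(contract_payoff: float, outside_offer: float,
                    penalty: float) -> bool:
    return outside_offer - penalty > contract_payoff

# A contract worth 10 to the agent, with a decommitting penalty of 3:
print(should_decommit(10, 12, 3))  # False: 12 - 3 = 9 < 10, stay committed
print(should_decommit(10, 15, 3))  # True: 15 - 3 = 12 > 10, pay and leave
```

Raising the penalty raises the level of commitment; a full-commitment contract corresponds to an infinite penalty, under which no outside offer triggers a decommit.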


An AI-Based Approach to Destination Control in Elevators

AI Magazine

Not widely known by the AI community, elevator control has become a major field of application for AI technologies. Techniques such as neural networks, genetic algorithms, fuzzy rules and, recently, multiagent systems and AI planning have been adopted by leading elevator companies not only to improve the transportation capacity of conventional elevator systems but also to revolutionize the way in which elevators interact with and serve passengers. In this article, we begin with an overview of AI techniques adopted by this industry and explain the motivations behind the continuous interest in AI. We review and summarize publications that are not easily accessible from the common AI sources. In the second part, we present in more detail a recent development project to apply AI planning and multiagent systems to elevator control problems.
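As a rough illustration of destination control, here is a greedy call-allocation sketch: passengers register their destination at a hall panel, and each (origin, destination) call is assigned to a car. The nearest-car rule below is a deliberately simplified stand-in for the AI planning and multiagent techniques the article describes, with invented floor numbers.

```python
# Simplified destination-dispatch allocation (illustrative only):
# assign each new call to the car currently closest to the boarding floor.
def assign_car(car_positions: list[int], origin: int) -> int:
    """Return the index of the car nearest to the call's origin floor."""
    return min(range(len(car_positions)),
               key=lambda i: abs(car_positions[i] - origin))

cars = [0, 7, 15]            # current floors of three cars
print(assign_car(cars, 6))   # car 1 (at floor 7) serves a call at floor 6
```

A real destination-control system would also weigh car load, committed stops, and grouping of passengers with shared destinations, which is where planning and multiagent coordination come in.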


A Review of the Twenty-Second SOAR Workshop

AI Magazine

SOAR is one of the oldest and largest AI development efforts, starting formally in 1983. It has also been proposed as a unified theory of cognition (Newell 1990). Most of its current development is as an AI programming language, which was evident at the Twenty-Second SOAR Workshop held at Soar Technology near the University of Michigan in Ann Arbor on 1-2 June 2002.


Towards Adjustable Autonomy for the Real World

Journal of Artificial Intelligence Research

Adjustable autonomy refers to entities dynamically varying their own autonomy, transferring decision-making control to other entities (typically agents transferring control to human users) in key situations. Determining whether and when such transfers-of-control should occur is arguably the fundamental research problem in adjustable autonomy. Previous work has investigated various approaches to addressing this problem but has often focused on individual agent-human interactions. Unfortunately, domains requiring collaboration between teams of agents and humans reveal two key shortcomings of these previous approaches. First, these approaches use rigid one-shot transfers of control that can result in unacceptable coordination failures in multiagent settings. Second, they ignore costs (e.g., in terms of time delays or effects on actions) to an agent's team due to such transfers-of-control. To remedy these problems, this article presents a novel approach to adjustable autonomy, based on the notion of a transfer-of-control strategy. A transfer-of-control strategy consists of a conditional sequence of two types of actions: (i) actions to transfer decision-making control (e.g., from an agent to a user or vice versa) and (ii) actions to change an agent's pre-specified coordination constraints with team members, aimed at minimizing miscoordination costs. The goal is for high-quality individual decisions to be made with minimal disruption to the coordination of the team. We present a mathematical model of transfer-of-control strategies. The model guides and informs the operationalization of the strategies using Markov Decision Processes, which select an optimal strategy, given an uncertain environment and costs to the individuals and teams. The approach has been carefully evaluated, including via its use in a real-world, deployed multi-agent system that assists a research group in its daily activities.
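The trade-off the model captures can be sketched with a one-step transfer-of-control strategy: give the human until a deadline, then take the decision back, charging a miscoordination cost for the delay. The function and all numbers below are hedged assumptions for illustration, not the article's MDP formulation.

```python
# Hedged sketch of evaluating a transfer-of-control strategy
# "transfer to the human; if no response by the deadline, the agent
# decides", with a cost for the coordination delay (all values invented).
def strategy_eu(p_human_responds: float, eu_human: float,
                eu_agent: float, wait_cost: float) -> float:
    return (p_human_responds * eu_human
            + (1 - p_human_responds) * eu_agent
            - wait_cost)

# Waiting pays off when the human's decision quality outweighs the delay:
print(strategy_eu(0.8, 10.0, 4.0, 1.0))  # 0.8*10 + 0.2*4 - 1 = 7.8
print(strategy_eu(0.8, 10.0, 4.0, 5.0))  # same strategy, costlier wait: 3.8
```

An MDP-based planner, as in the article, would compare many such conditional sequences of transfers and deadline changes and select the strategy with the highest expected utility.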


Monitoring Teams by Overhearing: A Multi-Agent Plan-Recognition Approach

Journal of Artificial Intelligence Research

Recent years have seen an increasing need for online monitoring of teams of cooperating agents, for example, for visualization or performance tracking. However, in monitoring deployed teams, we often cannot rely on the agents to always communicate their state to the monitoring system. This paper presents a non-intrusive approach to monitoring by 'overhearing', where the monitored team's state is inferred (via plan recognition) from team members' routine communications, exchanged as part of their coordinated task execution, and observed (overheard) by the monitoring system. Key challenges in this approach include the demanding run-time requirements of monitoring, the scarceness of observations (increasing monitoring uncertainty), and the need to scale up monitoring to address potentially large teams. To address these, we present a set of complementary novel techniques, exploiting knowledge of the social structures and procedures in the monitored team: (i) an efficient probabilistic plan-recognition algorithm, well suited for processing communications as observations; (ii) an approach to exploiting knowledge of the team's social behavior to predict future observations during execution (reducing monitoring uncertainty); and (iii) monitoring algorithms that trade expressivity for scalability, representing only certain useful monitoring hypotheses, but allowing any number of agents and their different activities to be represented in a single coherent entity. We present an empirical evaluation of these techniques, in combination and apart, in monitoring a deployed team of agents, running on machines physically distributed across the country and engaged in complex, dynamic task execution. We also compare the performance of these techniques to human expert and novice monitors, and show that the techniques presented are capable of monitoring at human-expert levels, despite the difficulty of the task.
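The core inference step of probabilistic plan recognition from overheard messages can be sketched as a Bayesian belief update: each candidate team plan assigns a likelihood to the overheard message, and the monitor reweights its belief accordingly. The plans, messages, and probabilities below are invented for illustration; the article's algorithm is considerably more elaborate.

```python
# Illustrative plan recognition by overhearing: Bayes-update a belief
# over the team's current plan from the likelihood of each plan
# producing the overheard message (all values are hypothetical).
def update_belief(prior: dict, likelihood: dict, message: str) -> dict:
    posterior = {p: prior[p] * likelihood[p].get(message, 0.0) for p in prior}
    z = sum(posterior.values())
    return {p: v / z for p, v in posterior.items()}

prior = {"attack": 0.5, "retreat": 0.5}
likelihood = {"attack": {"move_out": 0.9},   # "attack" often emits "move_out"
              "retreat": {"move_out": 0.2}}  # "retreat" rarely does
belief = update_belief(prior, likelihood, "move_out")
print(round(belief["attack"], 3))  # overhearing "move_out" favors "attack"
```

Because messages are scarce, a practical monitor also predicts which messages the team's social procedures should produce next, as technique (ii) above does, rather than waiting passively for observations.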


Interchanging Agents and Humans in Military Simulation

AI Magazine

The innovative reapplication of a multiagent system for human-in-the-loop (HIL) simulation was a consequence of appropriate agent-oriented design. The use of intelligent agents for simulating human decision making offers the potential for analysis and design methodologies that do not distinguish between agent and human until implementation. With this as a driver in the design process, the construction of systems in which humans and agents can be interchanged is simplified. The experiences gained from this process indicate that it is simpler, both in design and implementation, to add humans to a system designed for intelligent agents than it is to add intelligent agents to a system designed for humans.