
The 1999 Asia-Pacific Conference on Intelligent-Agent Technology

AI Magazine

Intelligent-agent technology is one of the most exciting, active areas of research and development in computer science and information technology today. The First Asia-Pacific Conference on Intelligent-Agent Technology (IAT'99) attracted researchers and practitioners from diverse fields such as computer science, information systems, business, telecommunications, manufacturing, human factors, psychology, education, and robotics to examine the design principles and performance characteristics of various approaches in agent technologies and, hence, fostered the cross-fertilization of ideas on the development of autonomous agents and multiagent systems among different domains.


2000 ACM Conference on Intelligent User Interfaces

AI Magazine

The 2000 Association for Computing Machinery Conference on Intelligent User Interfaces (IUI 2000) was held in New Orleans, Louisiana, from 9 to 12 January. This conference occupies the currently hot area that lies midway between the traditional fields of AI and computer-human interaction (CHI). For AI practitioners, this conference represents a good venue for learning both how to design user interfaces for AI applications and how to use AI techniques to improve the user experience with more conventional applications. This year's conference drew the largest audience yet for an IUI conference, but the conference still remains at a manageable, single-track size. A wide range of high-quality presentations, tutorials, demonstrations, and invited speakers provided a bridge between the AI and CHI communities.


Model-Based Diagnosis under Real-World Constraints

AI Magazine

I report on my experience over the past few years in introducing automated, model-based diagnostic technologies into industrial settings. In particular, I discuss the competition that this technology has been receiving from handcrafted, rule-based diagnostic systems, which has set some high standards that must be met by model-based systems before they can be viewed as viable alternatives. The battle between model-based and rule-based approaches to diagnosis has been over in the academic literature for many years, but the situation is different in industry, where rule-based systems are dominant and appear to be attractive given the considerations of efficiency, embeddability, and cost effectiveness. My goal in this article is to provide a perspective on this competition and discuss a diagnostic tool, called DTOOL/CNETS, that I have been developing over the years as I tried to address the major challenges posed by rule-based systems. In particular, I discuss three major features of the developed tool that were either adopted, designed, or innovated to address these challenges: (1) its compositional modeling approach, (2) its structure-based computational approach, and (3) its ability to synthesize embeddable diagnostic systems for a variety of software and hardware platforms.
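The article does not detail the internals of DTOOL/CNETS, but the model-based idea it defends can be illustrated with textbook consistency-based diagnosis: describe each component's correct behavior, then find the smallest sets of components whose failure would explain the observations. The sketch below is a minimal toy (a two-inverter circuit; all names are illustrative, not from the article).

```python
from itertools import combinations

# Toy circuit: two inverters in series, x -> inv1 -> m -> inv2 -> y.
# An OK component obeys its model; a faulty one is unconstrained.
def consistent(faulty, x, y):
    # The observation is consistent with the fault set if some value of the
    # internal wire m satisfies every non-faulty component's model.
    for m in (0, 1):
        ok1 = ('inv1' in faulty) or (m == 1 - x)
        ok2 = ('inv2' in faulty) or (y == 1 - m)
        if ok1 and ok2:
            return True
    return False

def minimal_diagnoses(components, x, y):
    """Return the minimum-cardinality fault sets consistent with (x, y)."""
    for size in range(len(components) + 1):
        diagnoses = [set(c) for c in combinations(components, size)
                     if consistent(set(c), x, y)]
        if diagnoses:
            return diagnoses
    return []

# x=1 should yield y=1; observing y=0 implicates one of the two inverters.
print(minimal_diagnoses(['inv1', 'inv2'], x=1, y=0))  # [{'inv1'}, {'inv2'}]
print(minimal_diagnoses(['inv1', 'inv2'], x=1, y=1))  # [set()]: no fault needed
```

Unlike a rule base, nothing here enumerates symptoms: diagnoses fall out of the component models, which is the compositionality the article argues for.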


On the Compilability and Expressive Power of Propositional Planning Formalisms

Journal of Artificial Intelligence Research

The recent approaches of extending the GRAPHPLAN algorithm to handle more expressive planning formalisms raise the question of what the formal meaning of "expressive power" is. We formalize the intuition that expressive power is a measure of how concisely planning domains and plans can be expressed in a particular formalism by introducing the notion of "compilation schemes" between planning formalisms. Using this notion, we analyze the expressiveness of a large family of propositional planning formalisms, ranging from basic STRIPS to a formalism with conditional effects, partial state specifications, and propositional formulae in the preconditions. One of the results is that conditional effects cannot be compiled away if plan size should grow only linearly but can be compiled away if we allow for polynomial growth of the resulting plans. This result confirms that the recently proposed extensions to the GRAPHPLAN algorithm concerning conditional effects are optimal with respect to the "compilability" framework. Another result is that general propositional formulae cannot be compiled into conditional effects if the plan size should be preserved linearly. This implies that allowing general propositional formulae in preconditions and effect conditions adds another level of difficulty in generating a plan.
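The case split behind such compilations can be made concrete. The sketch below uses the standard briefcase-style toy example (not taken from the paper): one action with a conditional effect is replaced by two plain actions, one per truth value of the effect condition, with the condition folded into the precondition. The plan stays the same length, but n conditional effects would yield 2^n case actions, which hints at why no polynomial-size compilation can preserve plan size linearly in general. (Real compilations also eliminate the negative precondition via complementary fluents; that detail is skipped here.)

```python
# One action with a conditional effect: moving the briefcase also moves
# the book, but only when the book is inside.
def move_conditional(state):
    new = set(state) - {'briefcase_home'} | {'briefcase_office'}
    if 'book_in_briefcase' in state:
        new = new - {'book_home'} | {'book_office'}
    return frozenset(new)

# Compiled away: two unconditional actions whose preconditions split on
# the effect condition.
def move_with_book(state):
    assert 'book_in_briefcase' in state            # precondition
    return frozenset(set(state) - {'briefcase_home', 'book_home'}
                     | {'briefcase_office', 'book_office'})

def move_without_book(state):
    assert 'book_in_briefcase' not in state        # precondition
    return frozenset(set(state) - {'briefcase_home'} | {'briefcase_office'})

# The compiled pair produces exactly the successors of the original action.
s1 = frozenset({'briefcase_home', 'book_home', 'book_in_briefcase'})
s2 = frozenset({'briefcase_home', 'book_home'})
assert move_conditional(s1) == move_with_book(s1)
assert move_conditional(s2) == move_without_book(s2)
```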


Axiomatizing Causal Reasoning

Journal of Artificial Intelligence Research

Causal models defined in terms of a collection of equations, as defined by Pearl, are axiomatized here. Axiomatizations are provided for three successively more general classes of causal models: (1) the class of recursive theories (those without feedback), (2) the class of theories where the solutions to the equations are unique, (3) arbitrary theories (where the equations may not have solutions and, if they do, they are not necessarily unique). It is shown that to reason about causality in the most general third class, we must extend the language used by Galles and Pearl (1997, 1998). In addition, the complexity of the decision procedures is characterized for all the languages and classes of models considered.
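A worked toy example may make the setting concrete. The sketch below (illustrative, not from the paper) solves a two-equation recursive model, the simplest of the three classes above, and applies Pearl's do-operator by replacing a variable's equation with a constant.

```python
def solve(equations, exogenous, interventions=None):
    """Solve a recursive (feedback-free) causal model by substitution.
    `interventions` replaces a variable's equation with a constant,
    i.e. Pearl's do-operator."""
    interventions = dict(interventions or {})
    values = dict(exogenous)
    values.update(interventions)
    pending = {v: f for v, f in equations.items() if v not in interventions}
    while pending:
        for v, f in list(pending.items()):
            try:
                values[v] = f(values)   # succeeds once all parents are solved
                del pending[v]
            except KeyError:
                continue                # a parent is not solved yet; retry
    return values

equations = {
    'X': lambda v: v['U'],       # X := U
    'Y': lambda v: 1 - v['X'],   # Y := not X
}
print(solve(equations, {'U': 1}))                  # {'U': 1, 'X': 1, 'Y': 0}
print(solve(equations, {'U': 1}, {'X': 0}))        # do(X=0): Y flips to 1
```

Because the model is recursive, substitution always terminates with a unique solution; the paper's harder classes (2) and (3) are exactly those where this convenience is lost.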


Backbone Fragility and the Local Search Cost Peak

Journal of Artificial Intelligence Research

The local search algorithm WSat is one of the most successful algorithms for solving the satisfiability (SAT) problem. It is notably effective at solving hard Random 3-SAT instances near the so-called "satisfiability threshold" but still shows a peak in search cost near the threshold and large variations in cost over different instances. We make a number of significant contributions to the analysis of WSat on high-cost random instances, using the recently introduced concept of the backbone of a SAT instance. The backbone is the set of literals that are entailed by an instance. We find that the number of solutions predicts the cost well for small-backbone instances but is much less relevant for the large-backbone instances that appear near the threshold and dominate in the overconstrained region. We show a very strong correlation between search cost and the Hamming distance to the nearest solution early in WSat's search. This pattern leads us to introduce a measure of the backbone fragility of an instance, which indicates how persistent the backbone is as clauses are removed. We propose that high-cost random instances for local search are those with very large backbones that are also backbone-fragile. We suggest that the decay in cost beyond the satisfiability threshold is due to increasing backbone robustness (the opposite of backbone fragility). Our hypothesis makes three correct predictions: first, that the backbone robustness of an instance is negatively correlated with the local search cost when other factors are controlled for; second, that backbone-minimal instances (which are 3-SAT instances altered so as to be more backbone-fragile) are unusually hard for WSat; and third, that the clauses most often unsatisfied during search are those whose deletion has the most effect on the backbone. In understanding the pathologies of local search methods, we hope to contribute to the development of new and better techniques.
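The backbone of a small instance can be computed by brute force, which makes the definition concrete. The sketch below is illustrative only and nowhere near the scale of the instances studied; clauses use DIMACS-style signed integers.

```python
from itertools import product

def backbone(n_vars, clauses):
    """Brute-force backbone: the set of literals true in every satisfying
    assignment. Clauses are lists of nonzero ints (-2 means "not x2").
    Returns None for an unsatisfiable instance (backbone undefined)."""
    models = []
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
            models.append(assignment)
    if not models:
        return None
    lits = set()
    for v in range(1, n_vars + 1):
        if all(m[v] for m in models):
            lits.add(v)           # v is entailed
        if all(not m[v] for m in models):
            lits.add(-v)          # not-v is entailed
    return lits

# x1 is forced; x2 and x3 are free apart from the clause (x2 or x3),
# so the backbone is just {x1}.
print(backbone(3, [[1], [2, 3]]))  # {1}
```

The paper's backbone-fragility measure then asks how much this set shrinks as clauses are deleted, which the reader can probe by dropping clauses from the list above.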


A Theory of Universal Artificial Intelligence based on Algorithmic Complexity

arXiv.org Artificial Intelligence

Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameterless theory of universal Artificial Intelligence. We give strong arguments that the resulting AIXI model is the most intelligent unbiased agent possible. We outline for a number of problem classes, including sequence prediction, strategic games, function minimization, and reinforcement and supervised learning, how the AIXI model can formally solve them. The major drawback of the AIXI model is that it is uncomputable. To overcome this problem, we construct a modified algorithm AIXI-tl, which is still effectively more intelligent than any other time t and space l bounded agent. The computation time of AIXI-tl is of the order t·2^l. Other discussed topics are formal definitions of intelligence order relations, the horizon problem, and relations of the AIXI theory to other AI approaches.
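AIXI proper is uncomputable, since it mixes over all environment programs weighted by 2^(-program length). As a heavily simplified illustration of the recipe only, the sketch below runs a Bayes-mixture expectimax over a tiny hand-picked class of deterministic environments with Solomonoff-style 2^(-length) prior weights; every name and environment here is invented for the example, not taken from the paper.

```python
ACTIONS = (0, 1)

def env_zero(history, action):   # emits observation 0; rewards action 0
    return (0, 1 if action == 0 else 0)

def env_one(history, action):    # emits observation 1; rewards action 1
    return (1, 1 if action == 1 else 0)

# Pretend "program lengths" 1 and 2 give prior weights 2^-1 and 2^-2.
ENVS = [(0.5, env_zero), (0.25, env_one)]

def consistent(env, history):
    """A deterministic environment is kept only if it reproduces the history."""
    h = []
    for action, percept in history:
        if env(h, action) != percept:
            return False
        h.append((action, percept))
    return True

def q_value(history, action, horizon):
    """Expected reward-to-go of `action`: average over the environments
    still consistent with the history, acting optimally afterwards."""
    envs = [(w, e) for w, e in ENVS if consistent(e, history)]
    total = sum(w for w, _ in envs)
    if not envs:
        return 0.0
    branches = {}                      # percept -> posterior weight mass
    for w, e in envs:
        percept = e(history, action)
        branches[percept] = branches.get(percept, 0.0) + w
    v = 0.0
    for (obs, reward), w in branches.items():
        future = history + [(action, (obs, reward))]
        cont = 0.0 if horizon == 1 else max(
            q_value(future, a, horizon - 1) for a in ACTIONS)
        v += (w / total) * (reward + cont)
    return v

def best_action(history, horizon):
    return max(ACTIONS, key=lambda a: q_value(history, a, horizon))

print(best_action([], horizon=2))             # 0: the "shorter" env dominates
print(best_action([(0, (1, 0))], horizon=1))  # 1: observation 1 reveals env_one
```

The real model replaces this two-element class with all programs, which is exactly where computability is lost and where the time-and-space-bounded AIXI-tl variant comes in.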


Vision, Strategy, and Localization Using the Sony Robots at RoboCup-98

AI Magazine

Sony has provided a robot platform for research and development in physical agents, namely, fully autonomous legged robots. In this article, we describe our work using Sony's legged robots to participate in the RoboCup-98 legged robot demonstration and competition. Robotic soccer represents a challenging environment for research in systems with multiple robots that need to achieve concrete objectives, particularly in the presence of an adversary. Furthermore, RoboCup offers an excellent opportunity for robot entertainment. We introduce the RoboCup context and briefly present Sony's legged robot. We developed a vision-based navigation algorithm and a Bayesian localization algorithm. Team strategy is achieved through predefined behaviors and learning by instruction.
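The article names but does not specify its Bayesian localization algorithm; as a generic illustration of the idea, the sketch below runs a one-dimensional histogram (discrete Bayes) filter over a toy corridor map. The map, landmark positions, and noise values are all invented for the example.

```python
N = 10                     # cells in a circular toy corridor
DOORS = [0, 3, 6]          # cells containing a distinguishable landmark

def sense(belief, saw_door, p_hit=0.9, p_miss=0.1):
    """Bayes update: reweight each cell by the observation likelihood."""
    weights = [b * (p_hit if (cell in DOORS) == saw_door else p_miss)
               for cell, b in enumerate(belief)]
    total = sum(weights)
    return [w / total for w in weights]

def move(belief, step=1, p_exact=0.8, p_slip=0.2):
    """Prediction: shift belief by `step` cells, with a chance of slipping."""
    new = [0.0] * N
    for cell, b in enumerate(belief):
        new[(cell + step) % N] += b * p_exact
        new[cell] += b * p_slip        # wheel slip: robot stays put
    return new

belief = [1.0 / N] * N                 # uniform prior: position unknown
belief = sense(belief, saw_door=True)  # see a door
belief = move(belief, step=1)          # move one cell
belief = sense(belief, saw_door=False) # no door any more
print(max(range(N), key=lambda c: belief[c]))  # 1: a cell just past a door
```

On a real robot the same update runs over a 2-D pose grid (or particles) with vision-derived landmark likelihoods, but the sense/move alternation is identical.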


Three RoboCup Simulation League Commentator Systems

AI Magazine

The information units resulting from such an analysis encode a deeper understanding of the time-varying scene to be described. They include spatial relations for the explicit characterization of spatial arrangements of objects as well as representations of recognized object movements. It provides a dynamic, real-time environment in which it is still relatively easy for tasks to be classified, monitored, and assessed. Moreover, a commentary system has severe time restrictions imposed by the flow of the game and is thus a good test bed for research into real-time AI.


Workshop on Intelligent Information Integration (III-99)

AI Magazine

The Workshop on Intelligent Information Integration (III), organized in conjunction with the Sixteenth International Joint Conference on Artificial Intelligence, was held on 31 July 1999 in Stockholm, Sweden. Approximately 40 people participated, and nearly 20 papers were presented. This packed workshop schedule resulted from a large number of submissions, which made it difficult to reserve discussion time without rejecting a disproportionately large number of papers. Participants included scientists and practitioners from industry and academia. Topics included query planning, applications of III, mediator architectures, and the use of ontologies for III.