Immobile Robots: AI in the New Millennium

AI Magazine

A new generation of sensor-rich, massively distributed, autonomous systems is being developed that has the potential for profound social, environmental, and economic change. These systems include networked building energy systems, autonomous space probes, chemical plant control systems, satellite constellations for remote ecosystem monitoring, power grids, biosphere-like life-support systems, and reconfigurable traffic systems, to name but a few. To achieve high performance, these immobile robots (or immobots) will need to develop sophisticated regulatory and immune systems that accurately and robustly control their complex internal functions. Thus, immobots will exploit a vast nervous system of sensors to model themselves and their environment on a grand scale. They will use these models to dramatically reconfigure themselves to survive decades of autonomous operation. Achieving these large-scale modeling and configuration tasks will require a tight coupling between the higher-level coordination function provided by symbolic reasoning and the lower-level autonomic processes of adaptive estimation and control. To be economically viable, they will need to be programmable purely through high-level compositional models. Self-modeling and self-configuration, autonomic functions coordinated through symbolic reasoning, and compositional, model-based programming are the three key elements of a model-based autonomous system architecture that is taking us into the new millennium.


Steps toward Formalizing Context

AI Magazine

The importance of contextual reasoning is emphasized by various researchers in AI. (A partial list includes John McCarthy and his group, R. V. Guha, Yoav Shoham, Giuseppe Attardi and Maria Simi, and Fausto Giunchiglia and his group.) Here, we survey the problem of formalizing context and explore what is needed for an acceptable account of this abstract notion.


The 1996 Simon Newcomb Award

AI Magazine

Simon Newcomb was a distinguished astronomer and computer who "proved" that heavier-than-air flight was impossible. His proofs are ingenious, cleverly argued, quite convincing to many of his contemporaries, and utterly wrong. The Simon Newcomb Award is given annually for the silliest published argument attacking AI. Our subject may be unique in the virulence and frequency with which it is attacked, both in the popular media and among the cultured intelligentsia. Recent articles have argued that the very idea of AI reflects a cancer in the heart of our culture and have proven (yet again) that it is impossible. While many of these attacks are cited widely, most of them are ridiculous to anyone with an appropriate technical education.


Fully Automated Design of Super-High-Rise Building Structures by a Hybrid AI Model on a Massively Parallel Machine

AI Magazine

This article presents an innovative research project (sponsored by the National Science Foundation, the American Iron and Steel Institute, and the American Institute of Steel Construction) where computationally elegant algorithms based on the integration of a novel connectionist computing model, mathematical optimization, and a massively parallel computer architecture are used to automate the complex process of engineering design.


Cue Phrase Classification Using Machine Learning

Journal of Artificial Intelligence Research

Cue phrases may be used in a discourse sense to explicitly signal discourse structure, but also in a sentential sense to convey semantic rather than structural information. Correctly classifying cue phrases as discourse or sentential is critical in natural language processing systems that exploit discourse structure, e.g., for performing tasks such as anaphora resolution and plan recognition. This paper explores the use of machine learning for classifying cue phrases as discourse or sentential. Two machine learning programs (Cgrendel and C4.5) are used to induce classification models from sets of pre-classified cue phrases and their features in text and speech. Machine learning is shown to be an effective technique not only for automating the generation of classification models but also for improving upon previous results. When compared to manually derived classification models already in the literature, the learned models often perform with higher accuracy and contain new linguistic insights into the data. In addition, the ability to automatically construct classification models makes it easier to comparatively analyze the utility of alternative feature representations of the data. Finally, the ease of retraining makes the learning approach more scalable and flexible than manual methods.
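
To make the learning task concrete, the sketch below trains a decision tree to label cue phrase occurrences as discourse or sentential. The paper used C4.5 and Cgrendel on corpus-derived features; here scikit-learn's CART-based decision tree stands in for C4.5, and the three features and toy examples are invented for illustration.

    # Hypothetical rendering of cue phrase classification. C4.5 is
    # replaced by scikit-learn's CART decision tree; the features and
    # data are toy stand-ins for the paper's text/speech features.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Features per cue phrase occurrence:
    # [sentence-initial (0/1), comma before (0/1), comma after (0/1)]
    X = [
        [1, 0, 1],  # "Now, we turn to ..."   -> discourse use
        [1, 0, 0],  # "Now we show that ..."  -> discourse use
        [0, 1, 0],  # "..., say, five items"  -> sentential use
        [0, 0, 0],  # "right now"             -> sentential use
        [1, 0, 1],
        [0, 0, 0],
    ]
    y = ["discourse", "discourse", "sentential",
         "sentential", "discourse", "sentential"]

    clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(clf, feature_names=["sent_initial", "comma_before", "comma_after"]))
    print(clf.predict([[1, 0, 1]]))  # -> ['discourse']

The printed tree plays the role of the learned classification model that the paper compares against manually derived ones.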


Accelerating Partial-Order Planners: Some Techniques for Effective Search Control and Pruning

Journal of Artificial Intelligence Research

We propose some domain-independent techniques for bringing well-founded partial-order planners closer to practicality. The first two techniques are aimed at improving search control while keeping overhead costs low. One is based on a simple adjustment to the default A* heuristic used by UCPOP to select plans for refinement. The other is based on preferring "zero commitment" (forced) plan refinements whenever possible and using LIFO prioritization otherwise. A more radical technique is the use of operator parameter domains to prune search. These domains are initially computed from the definitions of the operators and the initial and goal conditions, using a polynomial-time algorithm that propagates sets of constants through the operator graph, starting from the initial conditions. During planning, parameter domains can be used to prune nonviable operator instances and to remove spurious clobbering threats. In experiments based on modifications of UCPOP, our improved plan and goal selection strategies gave speedups by factors ranging from 5 to more than 1000 for a variety of problems that are nontrivial for the unmodified version. Crucially, the hardest problems gave the greatest improvements. The pruning technique based on parameter domains often gave speedups by an order of magnitude or more for difficult problems, both with the default UCPOP search strategy and with our improved strategy. The Lisp code for our techniques and for the test problems is provided in on-line appendices.
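
The parameter-domain technique lends itself to a small illustration. The sketch below propagates sets of constants from the initial conditions through operator definitions to a fixed point, computing for each operator parameter the constants it can ever be bound to. The "move" operator and the facts are invented, and the paper's polynomial-time algorithm over the operator graph is more refined than this naive fixpoint.

    # Hedged sketch of parameter-domain computation; not the paper's algorithm.
    operators = {
        "move": {  # move object x from a to b
            "params": ["x", "a", "b"],
            "pre": [("at", "x", "a"), ("adjacent", "a", "b")],
            "eff": [("at", "x", "b")],
        },
    }

    initial = {
        ("at", "robot", "room1"),
        ("adjacent", "room1", "room2"),
        ("adjacent", "room2", "room3"),
    }

    def match(atom, fact, binding):
        """Extend binding so atom (predicate + variables) equals fact, or None."""
        if atom[0] != fact[0] or len(atom) != len(fact):
            return None
        b = dict(binding)
        for var, const in zip(atom[1:], fact[1:]):
            if b.get(var, const) != const:
                return None
            b[var] = const
        return b

    def parameter_domains(operators, initial):
        facts = set(initial)
        domains = {op: {p: set() for p in spec["params"]}
                   for op, spec in operators.items()}
        changed = True
        while changed:
            changed = False
            for op, spec in operators.items():
                # All bindings whose preconditions are satisfied by known facts.
                bindings = [{}]
                for atom in spec["pre"]:
                    bindings = [b2 for b in bindings for f in facts
                                if (b2 := match(atom, f, b)) is not None]
                for b in bindings:
                    # Record the constants each parameter can take.
                    for p in spec["params"]:
                        if p in b and b[p] not in domains[op][p]:
                            domains[op][p].add(b[p])
                            changed = True
                    # Add the operator's ground effects as new facts.
                    for eff in spec["eff"]:
                        fact = (eff[0],) + tuple(b.get(t, t) for t in eff[1:])
                        if fact not in facts:
                            facts.add(fact)
                            changed = True
        return domains

    print(parameter_domains(operators, initial)["move"])
    # (set order may vary)
    # {'x': {'robot'}, 'a': {'room1', 'room2'}, 'b': {'room2', 'room3'}}

During search, any operator instance whose arguments fall outside these domains can be pruned immediately, which is the effect the paper exploits.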


Spatial Aggregation: Theory and Applications

Journal of Artificial Intelligence Research

Visual thinking plays an important role in scientific reasoning. Based on research in automating diverse reasoning tasks about dynamical systems, nonlinear controllers, kinematic mechanisms, and fluid motion, we have identified a style of visual thinking, imagistic reasoning. Imagistic reasoning organizes computations around image-like, analogue representations so that perceptual and symbolic operations can be brought to bear to infer structure and behavior. Programs incorporating imagistic reasoning have been shown to perform at an expert level in domains that defy current analytic or numerical methods. We have developed a computational paradigm, spatial aggregation, to unify the description of a class of imagistic problem solvers. A program written in this paradigm has the following properties. It takes a continuous field and optional objective functions as input and produces high-level descriptions of structure, behavior, or control actions. It computes multiple layers of intermediate representations, called spatial aggregates, by forming equivalence classes and adjacency relations. It employs a small set of generic operators, such as aggregation, classification, and localization, to perform bidirectional mapping between the information-rich field and successively more abstract spatial aggregates. It uses a data structure, the neighborhood graph, as a common interface to modularize computations. To illustrate our theory, we describe the computational structure of three implemented problem solvers (KAM, MAPS, and HIPAIR) in terms of the spatial aggregation generic operators by mixing and matching a library of commonly used routines.
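
As a toy rendering of the aggregate-classify-localize loop, the sketch below processes a sampled 1-D field: adjacent samples form the neighborhood graph, neighboring samples with the same monotone trend are classified as equivalent, and connected components are aggregated into higher-level intervals. The field and the trend feature are invented for illustration; KAM, MAPS, and HIPAIR operate over far richer fields.

    # Hedged sketch of one spatial aggregation pass on a 1-D scalar field.
    import math

    xs = [i * 0.1 for i in range(60)]
    field = [math.sin(x) for x in xs]   # the sampled input field

    # Neighborhood graph: adjacent samples are neighbors (a minimal choice).
    neighbors = {i: [j for j in (i - 1, i + 1) if 0 <= j < len(xs)]
                 for i in range(len(xs))}

    def same_class(i, j):
        # Classification: equivalent if forward-difference trends agree.
        # (>= 0 also merges the flat trend at the final sample.)
        gi = field[min(i + 1, len(xs) - 1)] - field[i]
        gj = field[min(j + 1, len(xs) - 1)] - field[j]
        return gi * gj >= 0

    # Aggregation: connected components of the equivalence relation become
    # the next-level spatial aggregates (maximal monotone intervals here).
    aggregates, seen = [], set()
    for i in range(len(xs)):
        if i in seen:
            continue
        stack, comp = [i], []
        while stack:
            k = stack.pop()
            if k in seen:
                continue
            seen.add(k)
            comp.append(k)
            stack.extend(j for j in neighbors[k]
                         if j not in seen and same_class(k, j))
        aggregates.append((min(comp), max(comp)))

    print(aggregates)   # [(0, 15), (16, 45), (46, 59)]: monotone segments

Repeating the same operators on these intervals, rather than on raw samples, gives the successively more abstract layers the paradigm describes.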


A Hierarchy of Tractable Subsets for Computing Stable Models

Journal of Artificial Intelligence Research

Finding the stable models of a knowledge base is a significant computational problem in artificial intelligence. This task is at the computational heart of truth maintenance systems, autoepistemic logic, and default logic. Unfortunately, it is NP-hard. In this paper we present a hierarchy of classes of knowledge bases, Omega_1, Omega_2, ..., with the following properties: first, Omega_1 is the class of all stratified knowledge bases; second, if a knowledge base Pi is in Omega_k, then Pi has at most k stable models, and all of them may be found in time O(l n k), where l is the length of the knowledge base and n is the number of atoms in Pi; third, for an arbitrary knowledge base Pi, we can find the minimum k such that Pi belongs to Omega_k in time polynomial in the size of Pi; and, last, where K is the class of all knowledge bases, the union of the classes Omega_i over all i >= 1 is exactly K; that is, every knowledge base belongs to some class in the hierarchy.
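
For concreteness, the sketch below is a brute-force stable-model checker for a tiny ground knowledge base, built directly on the standard Gelfond-Lifschitz reduct; the two-rule program is invented. This exhaustive enumeration is the exponential baseline that membership in an Omega_k class lets one avoid.

    # Brute-force stable models via the Gelfond-Lifschitz reduct.
    # Rules are (head, positive_body, negative_body) over ground atoms.
    from itertools import chain, combinations

    rules = [
        ("a", [], ["b"]),   # a :- not b.
        ("b", [], ["a"]),   # b :- not a.
    ]
    atoms = {"a", "b"}

    def deductive_closure(positive_rules):
        """Least model of a negation-free program (naive fixpoint)."""
        model, changed = set(), True
        while changed:
            changed = False
            for head, body in positive_rules:
                if all(p in model for p in body) and head not in model:
                    model.add(head)
                    changed = True
        return model

    def is_stable(candidate):
        # Reduct: drop rules whose negative body intersects the candidate;
        # delete the negative literals from the remaining rules.
        reduct = [(h, pb) for h, pb, nb in rules
                  if not any(n in candidate for n in nb)]
        return deductive_closure(reduct) == candidate

    for subset in chain.from_iterable(combinations(sorted(atoms), r)
                                      for r in range(len(atoms) + 1)):
        if is_stable(set(subset)):
            print("stable model:", set(subset))
    # prints {'a'} and {'b'}: the two stable models of this knowledge base

The two-rule base above already shows why the problem is hard in general: stable models must be guessed and then verified, and the Omega_k hierarchy identifies exactly when that guessing can be replaced by direct computation.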


Eighth Workshop on the Validation and Verification of Knowledge-Based Systems

AI Magazine

The Workshop on the Validation and Verification of Knowledge-Based Systems gathers researchers from government, industry, and academia to present the most recent work on this important aspect of knowledge-based system (KBS) development. The 1995 workshop focused on nontraditional KBSs, that is, systems developed using more than just the simple rule-based paradigm. This new focus showed how researchers are adjusting to the shift in KBS technology from stand-alone rule-based expert systems to embedded systems that use object-oriented technology, uncertainty, and nonmonotonic reasoning.


Life in the Fast Lane: The Evolution of an Adaptive Vehicle Control System

AI Magazine

Giving robots the ability to operate in the real world has been, and continues to be, one of the most difficult tasks in AI research. The authors' research has focused on using adaptive, vision-based systems to increase the driving performance of the Navlab line of on-road mobile robots. This research has led to the development of a neural network system that can learn to drive on many road types simply by watching a human teacher. This article describes the evolution of this system from a research project in machine learning to a robust driving system capable of executing tactical driving maneuvers such as lane changing and intersection navigation.
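
The "learn by watching a human teacher" idea is essentially behavioral cloning: supervised regression from camera input to the teacher's steering command. The sketch below is a deliberately tiny stand-in, with a linear predictor in place of the article's neural network and random data in place of road images; the 30x32 input size echoes the system's low-resolution camera retina, but everything else is invented.

    # Hedged sketch of behavioral cloning for steering; toy data only.
    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend training data: 30x32 grayscale "road images" flattened,
    # paired with the human teacher's steering angle (all values invented).
    n, d = 500, 30 * 32
    images = rng.normal(size=(n, d))
    true_w = rng.normal(size=d) / np.sqrt(d)
    steering = images @ true_w + 0.01 * rng.normal(size=n)

    # Train a linear steering predictor by gradient descent on squared error.
    w = np.zeros(d)
    lr = 0.1
    for step in range(200):
        pred = images @ w
        grad = images.T @ (pred - steering) / n
        w -= lr * grad

    print("training MSE:", float(np.mean((images @ w - steering) ** 2)))
    # At driving time, each new camera frame x yields a command: x @ w.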