Steps toward Formalizing Context
The importance of contextual reasoning has been emphasized by various researchers in AI. (A partial list includes John McCarthy and his group, R. V. Guha, Yoav Shoham, Giuseppe Attardi and Maria Simi, and Fausto Giunchiglia and his group.) Here, we survey the problem of formalizing context and explore what is needed for an acceptable account of this abstract notion.
Using Anytime Algorithms in Intelligent Systems
Anytime algorithms give intelligent systems the capability to trade deliberation time for quality of results. This capability is essential for successful operation in domains such as signal interpretation, real-time diagnosis and repair, and mobile robot control. What characterizes these domains is that it is not feasible (computationally) or desirable (economically) to compute the optimal answer. This article surveys the main control problems that arise when a system is composed of several anytime algorithms. These problems relate to optimal management of uncertainty and precision. After a brief introduction to anytime computation, I outline a wide range of existing solutions to the metalevel control problem and describe current work that is aimed at increasing the applicability of anytime computation.
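To make the time-quality tradeoff concrete, here is a minimal hypothetical sketch (not from the article): an interruptible Python estimator of pi based on the slowly converging Leibniz series, where the answer available at the deadline improves with the time budget. The function anytime_pi and the budgets are invented for this illustration.

    import time

    def anytime_pi(deadline):
        """Leibniz series pi/4 = 1 - 1/3 + 1/5 - ...: it converges slowly,
        so the quality of the running estimate visibly improves with time."""
        total, sign, k = 0.0, 1.0, 0
        while time.monotonic() < deadline:
            total += sign / (2 * k + 1)   # one cheap refinement step
            sign, k = -sign, k + 1
        return 4.0 * total                # best estimate available when interrupted

    # Trading deliberation time for quality: larger budgets give better answers.
    for budget in (0.001, 0.01, 0.1):
        estimate = anytime_pi(time.monotonic() + budget)
        print(f"budget={budget:g}s  pi ~ {estimate:.8f}")

A full anytime system would also attach a performance profile to such a routine so that a metalevel controller can decide how long it is worth running.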
Empirical Methods in Artificial Intelligence: A Review
Paul Cohen's book Empirical Methods for Artificial Intelligence aims to encourage the field's growing trend toward careful empirical evaluation by providing AI practitioners with the knowledge and tools it requires. The volume provides broad coverage of experimental design and statistics, ranging from a gentle introduction to basic ideas to a detailed presentation of advanced techniques, often combined with illustrative examples of their application to the empirical study of AI. The book is generally well written, clearly organized, and easy to understand; it contains some mathematics, but not enough to overwhelm readers. Examples come from AI work on planning, machine learning, natural language, and diagnosis.
From Data Mining to Knowledge Discovery in Databases
Fayyad, Usama, Piatetsky-Shapiro, Gregory, Smyth, Padhraic
Data mining and knowledge discovery in databases have been attracting significant research, industry, and media attention of late. What is all the excitement about? This article provides an overview of this emerging field, clarifying how data mining and knowledge discovery in databases are related both to each other and to neighboring fields such as machine learning, statistics, and databases. The article mentions particular real-world applications, specific data-mining techniques, challenges involved in real-world applications of knowledge discovery, and current and future research directions in the field.
Immobile Robots: AI in the New Millennium
Williams, Brian C., Nayak, P. Pandurang
A new generation of sensor-rich, massively distributed, autonomous systems is being developed that has the potential for profound social, environmental, and economic change. These systems include networked building energy systems, autonomous space probes, chemical plant control systems, satellite constellations for remote ecosystem monitoring, power grids, biosphere-like life-support systems, and reconfigurable traffic systems, to highlight but a few. To achieve high performance, these immobile robots (or immobots) will need to develop sophisticated regulatory and immune systems that accurately and robustly control their complex internal functions. Thus, immobots will exploit a vast nervous system of sensors to model themselves and their environment on a grand scale. They will use these models to dramatically reconfigure themselves to survive decades of autonomous operation. Achieving these large-scale modeling and configuration tasks will require a tight coupling between the higher-level coordination function provided by symbolic reasoning and the lower-level autonomic processes of adaptive estimation and control. To be economically viable, immobots will need to be programmable purely through high-level compositional models. Self-modeling and self-configuration, autonomic functions coordinated through symbolic reasoning, and compositional, model-based programming are the three key elements of a model-based autonomous system architecture that is taking us into the new millennium.
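As a loose illustration of coupling declarative component models with monitoring (a toy sketch, not the authors' architecture; the valve model, mode names, and diagnose function below are invented for this example), a self-modeling system can compare a commanded mode's predicted sensor values against observations and search fault modes when they disagree:

    # Toy component model: each mode of a valve predicts a sensor reading.
    MODEL = {
        "open":       {"flow": 1},   # nominal modes
        "closed":     {"flow": 0},
        "stuck_shut": {"flow": 0},   # fault mode
    }
    FAULT_MODES = {"stuck_shut"}

    def diagnose(commanded, observed):
        """If the model's prediction for the commanded mode matches the
        sensors, report nominal; otherwise search the fault modes for one
        that explains the observations."""
        if MODEL[commanded] == observed:
            return [commanded]
        return [m for m in FAULT_MODES if MODEL[m] == observed] or ["unknown"]

    print(diagnose("open", {"flow": 1}))  # nominal -> ['open']
    print(diagnose("open", {"flow": 0}))  # anomaly -> ['stuck_shut']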
The 1996 Simon Newcomb Award
Ford, Kenneth M., Hayes, Patrick J.
Simon Newcomb was a distinguished astronomer and computer who "proved" that heavier-than-air flight was impossible. His proofs are ingenious, cleverly argued, quite convincing to many of his contemporaries, and utterly wrong. The Simon Newcomb Award is given annually for the silliest published argument attacking AI. Our subject may be unique in the virulence and frequency with which it is attacked, both in the popular media and among the cultured intelligentsia. Recent articles have argued that the very idea of AI reflects a cancer in the heart of our culture and have proven (yet again) that it is impossible. While many of these attacks are cited widely, most of them are ridiculous to anyone with an appropriate technical education.
Fully Automated Design of Super-High-Rise Building Structures by a Hybrid AI Model on a Massively Parallel Machine
This article presents an innovative research project (sponsored by the National Science Foundation, the American Iron and Steel Institute, and the American Institute of Steel Construction) in which computationally elegant algorithms, based on the integration of a novel connectionist computing model, mathematical optimization, and a massively parallel computer architecture, are used to automate the complex process of engineering design.
Cue Phrase Classification Using Machine Learning
Cue phrases may be used in a discourse sense, to explicitly signal discourse structure, or in a sentential sense, to convey semantic rather than structural information. Correctly classifying cue phrases as discourse or sentential is critical in natural language processing systems that exploit discourse structure, e.g., for performing tasks such as anaphora resolution and plan recognition. This paper explores the use of machine learning for classifying cue phrases as discourse or sentential. Two machine learning programs (CGRENDEL and C4.5) are used to induce classification models from sets of pre-classified cue phrases and their features in text and speech. Machine learning is shown to be an effective technique not only for automating the generation of classification models but also for improving on previous results. When compared with manually derived classification models already in the literature, the learned models often perform with higher accuracy and contain new linguistic insights into the data. In addition, the ability to automatically construct classification models makes it easier to comparatively analyze the utility of alternative feature representations of the data. Finally, the ease of retraining makes the learning approach more scalable and flexible than manual methods.
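By way of illustration, the toy sketch below induces a discourse/sentential classifier from hand-labeled feature vectors with a decision tree standing in for C4.5. The features (utterance-initial position, preceding pause) and the data are invented for the example, and scikit-learn is used rather than the programs from the paper:

    # Invented toy features for each cue phrase occurrence:
    # [utterance_initial, preceded_by_pause]
    from sklearn.tree import DecisionTreeClassifier, export_text

    X = [[1, 1], [1, 0], [0, 0], [0, 1], [1, 1], [0, 0]]
    y = ["discourse", "discourse", "sentential", "sentential",
         "discourse", "sentential"]

    tree = DecisionTreeClassifier().fit(X, y)           # C4.5-style induction
    print(export_text(tree, feature_names=["initial", "pause_before"]))
    print(tree.predict([[1, 1]]))  # e.g., an utterance-initial "now" after a pause

Printing the induced tree mirrors one of the paper's points: the learned model is inspectable and can be compared against manually derived classification rules.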
Accelerating Partial-Order Planners: Some Techniques for Effective Search Control and Pruning
We propose some domain-independent techniques for bringing well-founded partial-order planners closer to practicality. The first two techniques are aimed at improving search control while keeping overhead costs low. One is based on a simple adjustment to the default A* heuristic used by UCPOP to select plans for refinement. The other is based on preferring "zero commitment" (forced) plan refinements whenever possible and using LIFO prioritization otherwise. A more radical technique is the use of operator parameter domains to prune search. These domains are initially computed from the definitions of the operators and the initial and goal conditions, using a polynomial-time algorithm that propagates sets of constants through the operator graph, starting from the initial conditions. During planning, parameter domains can be used to prune nonviable operator instances and to remove spurious clobbering threats. In experiments based on modifications of UCPOP, our improved plan and goal selection strategies gave speedups by factors ranging from 5 to more than 1000 for a variety of problems that are nontrivial for the unmodified version. Crucially, the hardest problems gave the greatest improvements. The pruning technique based on parameter domains often gave speedups by an order of magnitude or more for difficult problems, both with the default UCPOP search strategy and with our improved strategy. The Lisp code for our techniques and for the test problems is provided in on-line appendices.
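To illustrate the parameter-domain idea (a simplified reachability-style sketch, not the paper's exact algorithm; the tiny move operator, predicates, and constants below are invented), one can propagate sets of constants through operator preconditions and effects to a fixpoint, then read off which constants can ever fill each parameter slot:

    from itertools import product

    # Invented mini-domain: move(o, s, d) with preconditions at(o, s) and
    # road(s, d), and effect at(o, d).
    OBJECTS = ["pkgA"]
    PLACES = ["depot", "hub", "store"]
    CONSTANTS = OBJECTS + PLACES
    INIT = {("at", "pkgA", "depot"),
            ("road", "depot", "hub"), ("road", "hub", "store")}

    def move_instances(facts):
        """Yield ground bindings (o, s, d) whose preconditions hold in facts."""
        for o, s, d in product(CONSTANTS, repeat=3):
            if ("at", o, s) in facts and ("road", s, d) in facts:
                yield o, s, d

    # Delete-free reachability: propagate effects until a fixpoint.
    facts = set(INIT)
    while True:
        new = {("at", o, d) for o, s, d in move_instances(facts)} - facts
        if not new:
            break
        facts |= new

    # Parameter domains: the constants that can ever fill each slot of move.
    domains = [set(), set(), set()]
    for binding in move_instances(facts):
        for slot, const in enumerate(binding):
            domains[slot].add(const)
    print(domains)  # o in {pkgA}; s in {depot, hub}; d in {hub, store}

During planning, any move instance whose arguments fall outside these precomputed sets is nonviable and can be pruned without further search.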