Using Artificial Neural Networks to Predict the Quality and Performance of Oil-Field Cements

AI Magazine

Inherent batch-to-batch variability, aging, and contamination are major factors contributing to variability in oil-field cement-slurry performance. Of particular concern are problems encountered when a slurry is formulated with one cement sample and used with a batch having different properties. Such variability imposes a heavy burden on performance testing and is often a major factor in operational failure. We describe methods that allow the identification, characterization, and prediction of the variability of oil-field cements. Our approach involves predicting cement compositions, particle-size distributions, and thickening-time curves from the diffuse reflectance infrared Fourier transform spectrum of neat cement powders. Predictions make use of artificial neural networks. Slurry formulation thickening times can be predicted with uncertainties of less than 10 percent. Composition and particle-size distributions can be predicted with uncertainties a little greater than measurement error, but general trends and differences between cements can be determined reliably. Our research shows that many key cement properties are captured within the Fourier transform infrared spectra of cement powders and can be predicted from these spectra using suitable neural network techniques. Several case studies are given to emphasize the use of these techniques, which provide the basis for a valuable quality control tool now finding commercial use in the oil field.
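
A minimal sketch of the kind of spectrum-to-property regression described above, using scikit-learn's MLPRegressor as a stand-in for the authors' networks; the array shapes, sample counts, and property values below are hypothetical placeholders rather than data from the paper.

# Sketch only: regress a cement property such as thickening time onto a
# diffuse reflectance infrared (DRIFT) spectrum with a small feed-forward net.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
spectra = rng.random((60, 400))              # 60 cement samples x 400 wavenumber points (placeholder)
thickening_time = rng.uniform(90, 240, 60)   # minutes to a reference consistency (placeholder)

X_train, X_test, y_train, y_test = train_test_split(spectra, thickening_time, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0))
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))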


Integration of Knowledge and Neural Heuristics

AI Magazine

This article discusses the First International Symposium on Integrating Knowledge and Neural Heuristics, held on 9 to 10 May 1994 in Pensacola, Florida. The highlights of the event are summarized, organized according to the five areas of concentration at the conference: (1) integration methodologies; (2) language, psychology, and cognitive science; (3) fuzzy logic; (4) learning; and (5) applications.


Diagnosing Delivery Problems in the White House Information-Distribution System

AI Magazine

As part of a collaboration with the White House Office of Media Affairs, members of the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology designed a system, called COMLINK, that distributes a daily stream of documents released by the Office of Media Affairs. Approximately 4,000 direct subscribers receive information from this service, but more than 100,000 people receive the information through redistribution channels. The information is distributed through e-mail and the World Wide Web. In such a large-scale distribution scheme, there is a constant problem of subscriptions becoming invalid because the user's e-mail account has been terminated. These invalid subscriptions cause a backwash of hundreds of bounced-mail messages each day that must be processed by the operators of the COMLINK system. To manage this annoying but necessary task, an expert system named BMES was developed to diagnose the failures of information delivery.
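
As a rough illustration of the kind of diagnosis involved (our sketch, not the BMES rule base): much of the task amounts to mapping the text of a bounced-mail message to a failure category such as a terminated account. The patterns and categories below are hypothetical.

# Hypothetical rules for classifying bounced-mail messages by failure type.
import re

RULES = [
    (re.compile(r"user unknown|no such user", re.I), "account terminated"),
    (re.compile(r"mailbox full|quota exceeded", re.I), "temporary failure: mailbox full"),
    (re.compile(r"host unknown|domain not found", re.I), "bad or defunct domain"),
]

def diagnose(bounce_text: str) -> str:
    """Return a coarse diagnosis for one bounced-mail message."""
    for pattern, diagnosis in RULES:
        if pattern.search(bounce_text):
            return diagnosis
    return "needs human review"

print(diagnose("550 5.1.1 <jdoe@example.gov>: user unknown"))  # -> account terminated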


Exploiting Causal Independence in Bayesian Network Inference

Journal of Artificial Intelligence Research

A new method is proposed for exploiting causal independencies in exact Bayesian network inference. A Bayesian network can be viewed as representing a factorization of a joint probability into the multiplication of a set of conditional probabilities. We present a notion of causal independence that enables one to further factorize the conditional probabilities into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability. The new formulation of causal independence lets us specify the conditional probability of a variable given its parents in terms of an associative and commutative operator, such as "or", "sum", or "max", on the contribution of each parent. We start with a simple algorithm, VE, for Bayesian network inference that, given evidence and a query variable, uses the factorization to find the posterior distribution of the query. We show how this algorithm can be extended to exploit causal independence. Empirical studies, based on the CPCS networks for medical diagnosis, show that this method is more efficient than previous methods and allows for inference in larger networks than previous algorithms.
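
As a worked illustration of the two levels of factorization described above (the notation here is ours): a Bayesian network over variables x_1, ..., x_n factorizes the joint distribution as

P(x_1,\dots,x_n) \;=\; \prod_{i=1}^{n} P\bigl(x_i \mid \mathrm{pa}(x_i)\bigr),

and causal independence further decomposes the conditional probability of an effect e with causes c_1, ..., c_m into per-cause contributions \xi_i, combined by an associative, commutative operator * such as "or", "sum", or "max":

P(e = \eta \mid c_1,\dots,c_m) \;=\; \sum_{\eta_1 * \cdots * \eta_m = \eta} \; \prod_{i=1}^{m} P(\xi_i = \eta_i \mid c_i).

Each factor P(\xi_i = \eta_i \mid c_i) involves only a single parent, which is what yields the finer-grain factorization exploited by the extended algorithm.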


Quantitative Results Comparing Three Intelligent Interfaces for Information Capture: A Case Study Adding Name Information into an Electronic Organizer

Journal of Artificial Intelligence Research

Efficiently entering information into a computer is key to enjoying the benefits of computing. This paper describes three intelligent user interfaces: handwriting recognition, adaptive menus, and predictive fillin. In the context of adding a person's name and address to an electronic organizer, tests show handwriting recognition is slower than typing on an on-screen, soft keyboard, while adaptive menus and predictive fillin can be twice as fast. This paper also presents strategies for applying these three interfaces to other information collection domains.
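
As a rough sketch of the predictive fill-in idea (ours, not the paper's implementation): the interface proposes values for a field from entries seen in earlier records, so a correct prediction can be accepted with a single action instead of being typed. The class and field names below are hypothetical.

# Hypothetical predictive fill-in: suggest field values from earlier records,
# ranked by frequency and filtered by the prefix typed so far.
from collections import Counter

class PredictiveField:
    def __init__(self):
        self.history = Counter()

    def record(self, value: str) -> None:
        """Remember a value the user actually entered for this field."""
        self.history[value] += 1

    def suggest(self, prefix: str, k: int = 3) -> list[str]:
        """Return up to k of the most frequent past values matching the prefix."""
        matches = [(count, value) for value, count in self.history.items()
                   if value.lower().startswith(prefix.lower())]
        return [value for _, value in sorted(matches, reverse=True)[:k]]

city = PredictiveField()
for past in ("Seattle", "Seattle", "Spokane", "Portland"):
    city.record(past)
print(city.suggest("S"))  # ['Seattle', 'Spokane']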


Learning First-Order Definitions of Functions

Journal of Artificial Intelligence Research

First-order learning involves finding a clause-form definition of a relation from examples of the relation and relevant background information. In this paper, a particular first-order learning system is modified to customize it for finding definitions of functional relations. This restriction leads to faster learning times and, in some cases, to definitions that have higher predictive accuracy. Other first-order learning systems might benefit from similar specialization.
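
As a small illustrative example (ours, not drawn from the paper): a relation is functional when its output argument is determined by its input arguments, and a clause-form definition of such a relation looks like the familiar recursive definition of addition on successor numerals,

\mathit{plus}(x, 0, x) \qquad\qquad \mathit{plus}(x, s(y), s(z)) \leftarrow \mathit{plus}(x, y, z),

where the third argument is a function of the first two; it is this functional restriction that the specialized learner exploits.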


Using Anytime Algorithms in Intelligent Systems

AI Magazine

Anytime algorithms give intelligent systems the capability to trade deliberation time for quality of results. This capability is essential for successful operation in domains such as signal interpretation, real-time diagnosis and repair, and mobile robot control. What characterizes these domains is that it is not feasible (computationally) or desirable (economically) to compute the optimal answer. This article surveys the main control problems that arise when a system is composed of several anytime algorithms. These problems relate to optimal management of uncertainty and precision. After a brief introduction to anytime computation, I outline a wide range of existing solutions to the metalevel control problem and describe current work that is aimed at increasing the applicability of anytime computation.
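
A minimal generic sketch of the anytime idea (not tied to any system in the article): the algorithm maintains a best-so-far answer whose quality improves with deliberation time, so the caller can interrupt it whenever further deliberation is no longer worth the delay.

# Generic anytime sketch: refine an estimate of pi until a time budget runs out,
# always keeping a usable best-so-far answer.
import time

def estimate_pi_anytime(budget_seconds: float) -> tuple[float, int]:
    """Leibniz-series estimate of pi that stops when its deadline is reached."""
    deadline = time.monotonic() + budget_seconds
    total, k = 0.0, 0
    while time.monotonic() < deadline:
        total += (-1) ** k / (2 * k + 1)   # each term refines the running answer
        k += 1
    return 4 * total, k                    # answer quality grows with terms used

for budget in (0.001, 0.01, 0.1):          # more deliberation time, better result
    value, terms = estimate_pi_anytime(budget)
    print(f"{budget:>6}s -> pi ~ {value:.6f} ({terms} terms)")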


From Data Mining to Knowledge Discovery in Databases

AI Magazine

Data mining and knowledge discovery in databases have been attracting a significant amount of research, industry, and media attention of late. What is all the excitement about? This article provides an overview of this emerging field, clarifying how data mining and knowledge discovery in databases are related both to each other and to related fields, such as machine learning, statistics, and databases. The article mentions particular real-world applications, specific data-mining techniques, challenges involved in real-world applications of knowledge discovery, and current and future research directions in the field.


Immobile Robots: AI in the New Millennium

AI Magazine

A new generation of sensor-rich, massively distributed, autonomous systems is being developed, with the potential for profound social, environmental, and economic change. These systems include networked building energy systems, autonomous space probes, chemical plant control systems, satellite constellations for remote ecosystem monitoring, power grids, biosphere-like life-support systems, and reconfigurable traffic systems, to highlight but a few. To achieve high performance, these immobile robots (or immobots) will need to develop sophisticated regulatory and immune systems that accurately and robustly control their complex internal functions. Thus, immobots will exploit a vast nervous system of sensors to model themselves and their environment on a grand scale. They will use these models to dramatically reconfigure themselves to survive decades of autonomous operation. Achieving these large-scale modeling and configuration tasks will require a tight coupling between the higher-level coordination function provided by symbolic reasoning and the lower-level autonomic processes of adaptive estimation and control. To be economically viable, they will need to be programmable purely through high-level compositional models. Self-modeling and self-configuration, autonomic functions coordinated through symbolic reasoning, and compositional, model-based programming are the three key elements of a model-based autonomous system architecture that is taking us into the new millennium.


Steps toward Formalizing Context

AI Magazine

The importance of contextual reasoning is emphasized by various researchers in AI. (A partial list includes John McCarthy and his group, R. V. Guha, Yoav Shoham, Giuseppe Attardi and Maria Simi, and Fausto Giunchiglia and his group.) Here, we survey the problem of formalizing context and explore what is needed for an acceptable account of this abstract notion.