Information Technology
Using Artificial Neural Networks to Predict the Quality and Performance of Oil-Field Cements
Coveney, P. V., Fletcher, P., Hughes, T. L.
Inherent batch-to-batch variability, aging, and contamination are major factors contributing to variability in oil-field cement-slurry performance. Of particular concern are problems encountered when a slurry is formulated with one cement sample and used with a batch having different properties. Such variability imposes a heavy burden on performance testing and is often a major factor in operational failure. We describe methods that allow the identification, characterization, and prediction of the variability of oil-field cements. Our approach involves predicting cement compositions, particle-size distributions, and thickening-time curves from the diffuse reflectance infrared Fourier transform spectrum of neat cement powders. Predictions make use of artificial neural networks. Slurry-formulation thickening times can be predicted with uncertainties of less than 10 percent. Compositions and particle-size distributions can be predicted with uncertainties only slightly greater than measurement error, but general trends and differences between cements can be determined reliably. Our research shows that many key cement properties are captured within the Fourier transform infrared spectra of cement powders and can be predicted from these spectra using suitable neural network techniques. Several case studies are given to illustrate the use of these techniques, which provide the basis for a valuable quality-control tool now finding commercial use in the oil field.
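As a rough illustration of the kind of spectrum-to-property mapping the abstract describes, the sketch below runs a one-hidden-layer feedforward pass from a spectrum vector to a scalar prediction. Everything here is invented for illustration: the layer sizes, the random (untrained) weights, and the 64-channel input are stand-ins, not the authors' actual network or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 64 spectrum channels in, one thickening-time
# value out. Weights are random, i.e. deliberately untrained.
W1 = rng.normal(size=(16, 64)) * 0.1
b1 = np.zeros(16)
W2 = rng.normal(size=(1, 16)) * 0.1
b2 = np.zeros(1)

def predict_thickening_time(spectrum):
    """One-hidden-layer feedforward pass: spectrum -> scalar."""
    h = np.tanh(W1 @ spectrum + b1)  # hidden activations
    return float((W2 @ h + b2)[0])

spectrum = rng.random(64)  # a fake reflectance spectrum
print(predict_thickening_time(spectrum))
```

In practice the weights would be fit by backpropagation against measured thickening-time curves; the forward pass is the only part shown here.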
Integration of Knowledge and Neural Heuristics
This article discusses the First International Symposium on Integrating Knowledge and Neural Heuristics, held 9-10 May 1994 in Pensacola, Florida. The highlights of the event are summarized, organized according to the five areas of concentration at the conference: (1) integration methodologies; (2) language, psychology, and cognitive science; (3) fuzzy logic; (4) learning; and (5) applications.
Diagnosing Delivery Problems in the White House Information-Distribution System
Nahabedian, Mark, Shrobe, Howard
As part of a collaboration with the White House Office of Media Affairs, members of the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology designed a system, called COMLINK, that distributes a daily stream of documents released by the Office of Media Affairs. Approximately 4,000 direct subscribers receive information from this service, but more than 100,000 people receive the information through redistribution channels. The information is distributed through e-mail and the World Wide Web. In such a large-scale distribution scheme, there is a constant problem of subscriptions becoming invalid because the user's e-mail account has terminated. These invalid subscriptions cause a backwash of hundreds of bounced-mail messages each day that must be processed by the operators of the COMLINK system. To manage this annoying but necessary task, an expert system named BMES was developed to diagnose the failures of information delivery.
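The kind of diagnosis the abstract attributes to BMES can be sketched as pattern-matching rules over bounce-message text. The rules, failure labels, and sample message below are invented for illustration and are not drawn from the actual BMES system.

```python
import re

# Hypothetical diagnosis rules: each maps a bounce-message
# pattern to a delivery-failure cause (patterns invented).
RULES = [
    (r"user unknown|no such user", "account-terminated"),
    (r"mailbox full|quota exceeded", "mailbox-full"),
    (r"host unknown|name service error", "bad-domain"),
]

def diagnose(bounce_text):
    """Return the first failure cause whose pattern matches."""
    text = bounce_text.lower()
    for pattern, cause in RULES:
        if re.search(pattern, text):
            return cause
    return "unknown"

print(diagnose("550 5.1.1 <jdoe@example.edu>: User unknown"))
# -> account-terminated
```

A diagnosis of "account-terminated" would justify cancelling the subscription automatically, which is exactly the operator burden the abstract describes removing.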
On the other hand ...
Ford, Kenneth M., Hayes, Patrick J., Agnew, Neil
This column, like many strange things in the modern world, was conceived in an email exchange. Someone said to an editor: "why not have a regular lighthearted column on AI topics?" The editor said: "what an excellent idea, and when will we get the first manuscript?" and the first person said: "oh but I didn't volunteer;" and the editor said: "listen, buddy, I can make your life very uncomfortable if I don't get some cooperation. We go to press next week." While looking for something to give him, we stumbled on this old manuscript, written years ago (with our esteemed colleague Neil Agnew, the Duke of York). Ever had an old sock that you try to throw away, but keep finding in the bottom of a drawer? This is a bit like that. Come to think of it, so is the frame problem. Anyway, you can't make an omelette without breaking eggs, so here is our first reflection. It's a variation on an old, old story ....
Exploiting Causal Independence in Bayesian Network Inference
A new method is proposed for exploiting causal independencies in exact Bayesian network inference. A Bayesian network can be viewed as representing a factorization of a joint probability into the multiplication of a set of conditional probabilities. We present a notion of causal independence that enables one to further factorize the conditional probabilities into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability. The new formulation of causal independence lets us specify the conditional probability of a variable given its parents in terms of an associative and commutative operator, such as "or", "sum", or "max", on the contribution of each parent. We start with a simple algorithm, VE, for Bayesian network inference that, given evidence and a query variable, uses the factorization to find the posterior distribution of the query. We show how this algorithm can be extended to exploit causal independence. Empirical studies, based on the CPCS networks for medical diagnosis, show that this method is more efficient than previous methods and allows inference in larger networks than previous algorithms.
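The classic example of the per-parent decomposition the abstract describes is the noisy-OR model, where each parent contributes independently and contributions combine with "or". The sketch below uses invented link probabilities to show how a full conditional table over all parent configurations is recovered from one small factor per parent.

```python
from itertools import product

def noisy_or(active_probs):
    """P(effect) when each active parent independently causes the
    effect with its own probability; contributions combine via OR."""
    p_fail = 1.0
    for p in active_probs:
        p_fail *= (1.0 - p)  # effect occurs unless every cause fails
    return 1.0 - p_fail

# Hypothetical per-parent link probabilities for three causes.
link = {"flu": 0.6, "cold": 0.3, "allergy": 0.2}

# The full conditional table has 2**3 rows, yet every entry is
# computed from just the three per-parent factors above.
cpt = {}
for states in product([0, 1], repeat=3):
    active = [p for (name, p), on in zip(link.items(), states) if on]
    cpt[states] = noisy_or(active)

print(round(cpt[(1, 1, 0)], 2))  # flu and cold present -> 0.72
```

Because "or" is associative and commutative, an inference algorithm can fold in one parent's factor at a time instead of materializing the exponentially large table, which is the source of the efficiency gains the abstract reports.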
Quantitative Results Comparing Three Intelligent Interfaces for Information Capture: A Case Study Adding Name Information into an Electronic Organizer
Schlimmer, J. C., Wells, P. C.
Efficiently entering information into a computer is key to enjoying the benefits of computing. This paper describes three intelligent user interfaces: handwriting recognition, adaptive menus, and predictive fillin. In the context of adding a person's name and address to an electronic organizer, tests show handwriting recognition is slower than typing on an on-screen, soft keyboard, while adaptive menus and predictive fillin can be twice as fast. This paper also presents strategies for applying these three interfaces to other information collection domains.
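Of the three interfaces compared, predictive fill-in is the easiest to sketch: suggest the most frequent previously entered value consistent with what the user has typed so far. The function and sample history below are invented illustrations, not the authors' implementation.

```python
def predict(prefix, history):
    """Suggest the most frequent past entry extending the prefix,
    or None when nothing in the history matches."""
    matches = [v for v in history if v.startswith(prefix)]
    if not matches:
        return None
    return max(set(matches), key=matches.count)

# Past "city" field entries (made up for the example).
history = ["Seattle", "Spokane", "Seattle", "Pullman"]
print(predict("S", history))  # -> Seattle (most frequent match)
```

A single accepted suggestion replaces many keystrokes, which is how predictive fill-in can beat soft-keyboard typing by the factor of two the abstract reports.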
Characterizations of Decomposable Dependency Models
Decomposable dependency models possess a number of interesting and useful properties. This paper presents new characterizations of decomposable models in terms of independence relationships, which are obtained by adding a single axiom to the well-known set characterizing dependency models that are isomorphic to undirected graphs. We also briefly discuss a potential application of our results to the problem of learning graphical models from data.
MUSE CSP: An Extension to the Constraint Satisfaction Problem
Helzerman, R. A., Harper, M. P.
This paper describes an extension to the constraint satisfaction problem (CSP) called MUSE CSP (MUltiply SEgmented Constraint Satisfaction Problem). This extension is especially useful for those problems which segment into multiple sets of partially shared variables. Such problems arise naturally in signal processing applications including computer vision, speech processing, and handwriting recognition. For these applications, it is often difficult to segment the data in only one way given the low-level information utilized by the segmentation algorithms. MUSE CSP can be used to compactly represent several similar instances of the constraint satisfaction problem. If multiple instances of a CSP have some common variables which have the same domains and constraints, then they can be combined into a single instance of a MUSE CSP, reducing the work required to apply the constraints. We introduce the concepts of MUSE node consistency, MUSE arc consistency, and MUSE path consistency. We then demonstrate how MUSE CSP can be used to compactly represent lexically ambiguous sentences and the multiple sentence hypotheses that are often generated by speech recognition algorithms so that grammar constraints can be used to provide parses for all syntactically correct sentences. Algorithms for MUSE arc and path consistency are provided. Finally, we discuss how to create a MUSE CSP from a set of CSPs which are labeled to indicate when the same variable is shared by more than a single CSP.
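For context, the sketch below implements ordinary single-instance arc consistency (an AC-3-style loop), which MUSE arc consistency generalizes to the shared-variable setting; the toy "x < y" constraint and domains are invented for illustration.

```python
from collections import deque

def revise(domains, constraints, x, y):
    """Drop values of x with no supporting value in y's domain."""
    allowed = constraints[(x, y)]
    keep = {a for a in domains[x]
            if any((a, b) in allowed for b in domains[y])}
    changed = keep != domains[x]
    domains[x] = keep
    return changed

def ac3(domains, constraints):
    """Enforce arc consistency over all constrained variable pairs."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            # x's domain shrank: recheck arcs pointing into x.
            queue.extend((z, w) for (z, w) in constraints
                         if w == x and z != y)
    return domains

# Toy example: the constraint x < y over {1, 2, 3}.
domains = {"x": {1, 2, 3}, "y": {1, 2, 3}}
lt = {(a, b) for a in range(1, 4) for b in range(1, 4) if a < b}
constraints = {("x", "y"): lt, ("y", "x"): {(b, a) for (a, b) in lt}}
ac3(domains, constraints)
print(domains)  # x keeps {1, 2}; y keeps {2, 3}
```

In the MUSE setting the same propagation runs once over variables shared by several CSP instances instead of once per instance, which is where the claimed savings come from.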
Learning First-Order Definitions of Functions
First-order learning involves finding a clause-form definition of a relation from examples of the relation and relevant background information. In this paper, a particular first-order learning system is modified to customize it for finding definitions of functional relations. This restriction leads to faster learning times and, in some cases, to definitions that have higher predictive accuracy. Other first-order learning systems might benefit from similar specialization.
Mechanisms for Automated Negotiation in State Oriented Domains
Zlotkin, G., Rosenschein, J. S.
This paper lays part of the groundwork for a domain theory of negotiation, that is, a way of classifying interactions so that it is clear, given a domain, which negotiation mechanisms and strategies are appropriate. We define State Oriented Domains, a general category of interaction. Necessary and sufficient conditions for cooperation are outlined. We use the notion of worth in an altered definition of utility, thus enabling agreements in a wider class of joint-goal reachable situations. An approach is offered for conflict resolution, and it is shown that even in a conflict situation, partial cooperative steps can be taken by interacting agents (that is, agents in fundamental conflict might still agree to cooperate up to a certain point). A Unified Negotiation Protocol (UNP) is developed that can be used in all types of encounters. It is shown that in certain borderline cooperative situations, a partial cooperative agreement (i.e., one that does not achieve all agents' goals) might be preferred by all agents, even though there exists a rational agreement that would achieve all their goals. Finally, we analyze cases where agents have incomplete information on the goals and worth of other agents. First, we consider the case where agents' goals are private information, and we analyze what goal declaration strategies the agents might adopt to increase their utility. Then, we consider the situation where the agents' goals (and therefore stand-alone costs) are common knowledge, but the worth they attach to their goals is private information. We introduce two mechanisms, one 'strict', the other 'tolerant', and analyze their effects on the stability and efficiency of negotiation outcomes.