Operations for Learning with Graphical Models
This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models are extended to model data analysis and empirical learning using the notation of plates. Graphical operations for simplifying and manipulating a problem are provided, including decomposition, differentiation, and the manipulation of probability models from the exponential family. Two standard algorithm schemas for learning are reviewed in a graphical framework: Gibbs sampling and the expectation maximization algorithm. Using these operations and schemas, some popular algorithms can be synthesized from their graphical specification. This includes versions of linear regression, techniques for feed-forward networks, and learning Gaussian and discrete Bayesian networks from data. The paper concludes by sketching some implications for data analysis and summarizing how some popular algorithms fall within the framework presented. The main original contributions here are the decomposition techniques and the demonstration that graphical models provide a framework for understanding and developing complex learning algorithms.
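One of the two algorithm schemas the abstract names, Gibbs sampling, can be sketched concretely. The following is a minimal sketch for a toy undirected model over two binary variables; the weight table, function names, and numbers are illustrative assumptions, not taken from the paper:

```python
import random

# Unnormalized joint p(x, y) over two binary variables (illustrative numbers):
# mass 8/10 on the agreeing states, so x and y are positively coupled.
W = {(0, 0): 4.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 4.0}

def sample_x_given_y(y, rng):
    # p(x = 1 | y) follows from the unnormalized weights by renormalizing.
    p1 = W[(1, y)] / (W[(0, y)] + W[(1, y)])
    return 1 if rng.random() < p1 else 0

def sample_y_given_x(x, rng):
    p1 = W[(x, 1)] / (W[(x, 0)] + W[(x, 1)])
    return 1 if rng.random() < p1 else 0

def gibbs(n_iters, seed=0):
    # Alternately resample each variable from its full conditional.
    rng = random.Random(seed)
    x, y = 0, 0
    samples = []
    for _ in range(n_iters):
        x = sample_x_given_y(y, rng)
        y = sample_y_given_x(x, rng)
        samples.append((x, y))
    return samples

samples = gibbs(20000)
# The empirical frequency of x == y should approach 8/10 = 0.8.
agree = sum(1 for x, y in samples if x == y) / len(samples)
```

The chain's long-run frequencies estimate marginals of the joint without ever computing its normalizing constant, which is the property that makes the schema useful for the larger graphical models the paper considers.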
A Report to ARPA on Twenty-First Century Intelligent Systems
Grosz, Barbara, Davis, Randall
This report stems from an April 1994 meeting, organized by AAAI at the suggestion of Steve Cross and Gio Wiederhold. The purpose of the meeting was to assist ARPA in defining an agenda for foundational AI research. Prior to the meeting, the fellows and officers of AAAI, as well as the report committee members, were asked to recommend areas in which major research thrusts could yield significant scientific gain -- with high potential impact on DOD applications -- over the next ten years. At the meeting, these suggestions and their relevance to current national needs and challenges in computing were discussed and debated. An initial draft of this report was circulated to the fellows and officers. The final report has benefited greatly from their comments and from textual revisions contributed by Joseph Halpern, Fernando Pereira, and Dana Nau.
DRAIR ADVISER: A Knowledge-Based System for Materiel-Deficiency Analysis
Robey, Brian L., Fink, Pamela K., Venkatesan, Sanjeev, Redfield, Carol L., Ferguson, Jerry W.
Southwest Research Institute and the U.S. Air Force Materiel Command designed and developed an automated system for the preparation of deficiency report analysis information reports (DRAIRs). A DRAIR provides Air Force engineers with an analysis of an aircraft item's performance history, including maintenance, supply, and cost. A DRAIR also recommends improvements for a deficient materiel or aircraft part. The successful design, development, and deployment of the DRAIR ADVISER system by applying a combination of knowledge-based system and database management techniques are the subject of this article.
Engineers and equipment specialists responsible for a troublesome part, or end item, review the MDR to identify the possible cause(s) of failure. In the past, they have turned to operations research (OR) analysts to assist in item performance analysis. This analysis is usually time consuming and personnel intensive and requires information from many Air Force data systems; data collection and analysis require two person-days. Automating the analysis would reduce demands on the OR analysts and provide additional time for them to address more complex analysis problems. Further, with the turnover of personnel in the military and the aging of the aircraft fleet, another objective was to capture expertise from the personnel who are most knowledgeable about specific aircraft systems and federal stock classes (FSCs) and make this expertise available. The work was performed for the Air Logistics Center (ALC) located at Tinker Air Force Base. A sample DRAIR excerpt illustrates the report's content. SOURCE DATA: The data used to prepare this report came from the following sources: 1) Product Performance Subsystem (G099), 2) Supportability Analysis Forecasting Evaluation (SAFE), 3) Flying Hours (G099), 4) MICAP Hours (D165B), and 5) VAMOSC (D160B). MAINTENANCE DATA (D056): A total of 175 inherent failures occurred between JUL 1991 and JUN 1992, which translates into a Mean Time Between Maintenance Type-1 (MTBM-1) of 162 hours.
Knowledge-Based Systems Research and Applications in Japan, 1992
Feigenbaum, Edward A., Friedland, Peter E., Johnson, Bruce B., Nii, H. Penny, Schorr, Herbert, Shrobe, Howard, Engelmore, Robert S.
This article summarizes the findings of a 1992 study of knowledge-based systems research and applications in Japan. Representatives of universities and businesses were chosen by the Japan Technology Evaluation Center to investigate the state of the technology in Japan relative to the United States. The panel's report focused on applications, tools, and research and development in universities and industry and on major national projects.
Applied AI News
A grant from the Advanced Technology Program at the National Institute of Standards and Technology will support Kurzweil AI's development of a spoken-language interface capable of controlling PC software applications through natural-language instruction. A Houston, Tex., firm has signed a strategic alliance agreement with Gensym (Cambridge, Mass.) to use Gensym's G2 real-time expert system development tool; Chevron installations are using G2 to intelligently monitor energy management and process simulation in conjunction with other systems. The Consolidated Communications Facility's Element Manager will allow data communications system operators to remotely configure, control, and monitor the operation of the front-end processor, providing simultaneous support for multiple manned space flight missions. Logica Cambridge (Cambridge, England) is developing a virtual reality application to improve the presentation of data for air traffic controllers, letting them see the heights of different aircraft rather than just altitudes displayed numerically. Developers at Georgia Tech (Atlanta, Ga.) have designed a neural network for modeling, control, and diagnosis, linked to sensors and other data sources on the factory floor. AT&T Universal Card Services (Jacksonville, Fla.) has signed a multiyear agreement with HNC (San Diego); HNC's Falcon uses neural network technology to learn and identify unusual transaction patterns.
PI-in-a-Box: A Knowledge-Based System for Space Science Experimentation
Franier, Richard, Groleau, Nicholas, Hazelton, Lyman, Colombano, Silvano, Compton, Michael, Statler, Irving, Szolovits, Peter, Young, Laurence
The principal-investigator-in-a-box (PI-IN-A-BOX) knowledge-based system helps astronauts perform science experiments in space. These experiments are typically costly to devise and build and often are difficult to perform. Further, the space laboratory environment is unique; ever changing; hectic; and, therefore, stressful. The environment requires quick, correct reactions to events over a wide range of experiments and disciplines, including ones distant from an astronaut's main science specialty. This environment suggests the use of advanced techniques for data collection, analysis, and decision making to maximize the value of the research performed. PI-IN-A-BOX aids astronauts with quick-look data collection, reduction, and analysis as well as equipment diagnosis and troubleshooting, procedural reminders, and suggestions for high-value departures from the preplanned experiment protocol. The astronauts have direct access to the system, which is hosted on a portable computer in the Spacelab module. The system is in use on the ground for mission training and was used in flight during the October 1993 Space Life Sciences 2 (SLS-2) shuttle mission.
Improving Performance in Neural Networks Using a Boosting Algorithm
Drucker, Harris, Schapire, Robert, Simard, Patrice
A boosting algorithm converts a learning machine with error rate less than 50% to one with an arbitrarily low error rate. However, the algorithm discussed here depends on having a large supply of independent training samples. We show how to circumvent this problem and generate an ensemble of learning machines whose performance in optical character recognition problems is dramatically improved over that of a single network. We report the effect of boosting on four databases (all handwritten) consisting of 12,000 digits from segmented ZIP codes from the United States Postal Service (USPS) and the following from the National Institute of Standards and Technology (NIST): 220,000 digits, 45,000 uppercase alphas, and 45,000 lowercase alphas. We use two performance measures: the raw error rate (no rejects) and the reject rate required to achieve a 1% error rate on the patterns not rejected.
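The boosting idea, a weighted vote of weak learners each trained to concentrate on the examples the previous ones got wrong, can be sketched in miniature. The sketch below is a generic AdaBoost-style reweighting scheme over decision stumps on toy 1-D data, not the boosting-by-filtering procedure this paper actually uses, and every name and number in it is an illustrative assumption:

```python
import math

# Toy 1-D data: labels are +1 to the right of 0.35 except at x = 0.65, so no
# single threshold stump is perfect, but a weighted vote of stumps is.
X = [0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95]
Y = [-1, -1, -1, 1, 1, 1, -1, 1, 1, 1]

def stump_predict(threshold, sign, x):
    # A decision stump: predict `sign` at or above the threshold, else -sign.
    return sign if x >= threshold else -sign

def best_stump(weights):
    # Exhaustively pick the stump with the lowest weighted training error.
    best = None
    for t in X:
        for sign in (1, -1):
            err = sum(w for x, y, w in zip(X, Y, weights)
                      if stump_predict(t, sign, x) != y)
            if best is None or err < best[0]:
                best = (err, t, sign)
    return best

def adaboost(rounds):
    weights = [1.0 / len(X)] * len(X)
    ensemble = []
    for _ in range(rounds):
        err, t, sign = best_stump(weights)
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)  # vote weight of this stump
        ensemble.append((alpha, t, sign))
        # Reweight: increase the weight of misclassified samples.
        weights = [w * math.exp(-alpha * y * stump_predict(t, sign, x))
                   for x, y, w in zip(X, Y, weights)]
        z = sum(weights)
        weights = [w / z for w in weights]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(t, s, x) for a, t, s in ensemble)
    return 1 if score >= 0 else -1

ensemble = adaboost(3)
train_errors = sum(1 for x, y in zip(X, Y) if predict(ensemble, x) != y)
# Three reweighted stumps classify this toy training set perfectly.
```

The same schema, with networks in place of stumps, is what lets an ensemble outperform any single member; the paper's contribution is obtaining the effect without an unbounded supply of fresh independent samples.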
Efficient Pattern Recognition Using a New Transformation Distance
Simard, Patrice, LeCun, Yann, Denker, John S.
Memory-based classification algorithms such as radial basis functions or K-nearest neighbors typically rely on simple distances (Euclidean, dot product ...), which are not particularly meaningful on pattern vectors. More complex, better suited distance measures are often expensive and rather ad hoc (elastic matching, deformable templates). We propose a new distance measure which (a) can be made locally invariant to any set of transformations of the input and (b) can be computed efficiently. We tested the method on large handwritten character databases provided by the Post Office and the NIST. Using invariances with respect to translation, rotation, scaling, shearing, and line thickness, the method consistently outperformed all other systems tested on the same databases.
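The core idea, measuring distance after allowing each pattern to move within its set of transformations, can be illustrated very crudely. The sketch below minimizes plain Euclidean distance over circular shifts of a 1-D pattern; the shifts stand in for the richer transformations (rotation, scaling, thickness) the paper handles, and the patterns and function names are illustrative assumptions, not the paper's tangent-distance construction, which is more general and more efficient:

```python
import math

def euclidean(a, b):
    # Plain Euclidean distance between two equal-length patterns.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def shift(pattern, k):
    # Circularly shift a 1-D pattern by k positions (a crude stand-in for
    # the transformation sets -- translation, rotation, etc. -- in the paper).
    return pattern[-k:] + pattern[:-k] if k else pattern[:]

def shift_invariant_distance(a, b, max_shift=2):
    # Minimize the simple distance over a small neighborhood of transforms
    # of b, making the measure locally invariant to translation.
    return min(euclidean(a, shift(b, k))
               for k in range(-max_shift, max_shift + 1))

# A prototype "stroke" and the same stroke translated by one position.
proto = [0, 1, 3, 1, 0, 0, 0, 0]
shifted = [0, 0, 1, 3, 1, 0, 0, 0]

d_plain = euclidean(proto, shifted)            # large: shapes misaligned
d_inv = shift_invariant_distance(proto, shifted)  # zero: identical up to shift
```

The plain distance treats the translated copy as a very different pattern, while the invariant distance recognizes it as the same shape, which is exactly the property that makes nearest-neighbor classification on character images work better.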