Applied AI News
The US Army has installed PRIDE (Pulse Radar Intelligent Diagnostic Environment), a diagnostic expert system developed by Carnegie Group (Pittsburgh, PA), in Saudi Arabia.

American Airlines (Dallas, TX) has developed an expert system, the Maintenance Operation Control Advisor.

Consolidated Edison (New York, NY) has developed the SOCCS Alarm Advisor, an expert system that recommends operator actions required to maintain the necessary and continuous power supply to its customers.

Kurzweil AI (Waltham, MA) has received a federal grant to develop VoiceGI, a voice-activated reporting and database management system.

Merlin is an expert system developed at Hewlett Packard's Networked Computer Manufacturing Operation (Roseville, CA) to forecast the factory's product demand.

Lucid (Menlo Park, CA), producer of the Lucid Common Lisp language, has acquired Peritus, a producer of C/C++ and FORTRAN compilers.

Nova Technology (Bethesda, MD), a new company founded by Naval Research Center scientist Harold Szu, plans to commercialize neural networks made from high-performance ...

Inference (El Segundo, CA) has named Peter Tierney CEO and president. Tierney was formerly VP of marketing at Oracle.
Case-Based Reasoning: A Research Paradigm
Expertise comprises experience. In solving a new problem, we rely on past episodes. We need to remember what plans succeed and what plans fail. We need to know how to modify an old plan to fit a new situation. Case-based reasoning is a general paradigm for reasoning from experience. It assumes a memory model for representing, indexing, and organizing past cases and a process model for retrieving and modifying old cases and assimilating new ones. Case-based reasoning provides a scientific cognitive model. The research issues for case-based reasoning include the representation of episodic knowledge, memory organization, indexing, case modification, and learning. In addition, computer implementations of case-based reasoning address many of the technological shortcomings of standard rule-based expert systems. These engineering concerns include knowledge acquisition and robustness. In this article, I review the history of case-based reasoning, including research conducted at the Yale AI Project and elsewhere.
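As a concrete illustration of the process model described above, the following minimal Python sketch implements a retrieve-adapt-retain loop over a flat case library with a numeric feature distance. The names (CaseLibrary, adapt) and the nearest-neighbor retrieval are illustrative assumptions, not details from the article.

    # Minimal case-based reasoning loop: retrieve, adapt, retain.
    # A "case" pairs a problem description (feature vector) with a solution.
    # All names and the distance metric are illustrative assumptions.

    import math

    class CaseLibrary:
        def __init__(self):
            self.cases = []          # list of (features, solution) pairs

        def retain(self, features, solution):
            self.cases.append((features, solution))

        def retrieve(self, features):
            # Index-free nearest-neighbor retrieval; real CBR systems use
            # richer memory organizations (e.g., discrimination networks).
            return min(self.cases, key=lambda c: math.dist(features, c[0]))

    def adapt(old_features, old_solution, new_features):
        # Trivial adaptation: reuse the old solution, annotated with the
        # differences a real adapter would repair.
        deltas = [n - o for n, o in zip(new_features, old_features)]
        return {"solution": old_solution, "unrepaired_deltas": deltas}

    library = CaseLibrary()
    library.retain([1.0, 0.0], "plan-A")
    library.retain([0.0, 1.0], "plan-B")

    problem = [0.9, 0.2]
    feats, sol = library.retrieve(problem)
    new_solution = adapt(feats, sol, problem)
    library.retain(problem, new_solution["solution"])   # assimilate the new case
    print(new_solution)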
A Survey of the Eighth National Conference on Artificial Intelligence: Pulling Together or Pulling Apart?
Fields 3-8 of table 1 represent purposes: specifically, to define models (field 3), prove theorems about the models (field 4), present algorithms (field 5), analyze algorithms (field 6), present systems or architectures (field 7), and analyze them (field 8). These purposes are not mutually exclusive; for example, many papers that present models also prove theorems about the models.

... of the survey and general results, a discussion of the four hypotheses, and two sections at the end of the article that contain details of the survey and statistical analyses. The next section (The Survey) briefly describes the 16 substantive questions I asked about each paper. One of the closing sections (An Explanation of the Fields in Table 1) discusses the criteria for answering the survey questions.
Handwritten Digit Recognition with a Back-Propagation Network
LeCun, Yann, Boser, Bernhard E., Denker, John S., Henderson, Donnie, Howard, R. E., Hubbard, Wayne E., Jackel, Lawrence D.
We present an application of back-propagation networks to handwritten digit recognition. Minimal preprocessing of the data was required, but the architecture of the network was highly constrained and specifically designed for the task. The input of the network consists of normalized images of isolated digits. The method has a 1% error rate and about a 9% reject rate on zipcode digits provided by the U.S. Postal Service.

1 INTRODUCTION

The main point of this paper is to show that large back-propagation (BP) networks can be applied to real image-recognition problems without a large, complex preprocessing stage requiring detailed engineering. Unlike most previous work on the subject (Denker et al., 1989), the learning network is directly fed with images, rather than feature vectors, thus demonstrating the ability of BP networks to deal with large amounts of low-level information. Previous work performed on simple digit images (Le Cun, 1989) showed that the architecture of the network strongly influences the network's generalization ability. Good generalization can only be obtained by designing a network architecture that contains a certain amount of a priori knowledge about the problem. The basic design principle is to minimize the number of free parameters that must be determined by the learning algorithm, without overly reducing the computational power of the network.
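The design principle the abstract describes, a constrained weight-sharing architecture fed raw normalized images, can be sketched in modern terms. The following PyTorch snippet is a hedged illustration only; the layer sizes and pooling choices are assumptions for a 16x16 input, not the exact published architecture.

    # Sketch of a small constrained (convolutional, weight-sharing) network
    # for 16x16 digit images, in the spirit of the paper's architecture.
    # Layer sizes here are illustrative assumptions, not the published ones.

    import torch
    import torch.nn as nn

    class DigitNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 4, kernel_size=5),   # local receptive fields,
                nn.Tanh(),                        # shared weights
                nn.AvgPool2d(2),                  # subsampling
                nn.Conv2d(4, 12, kernel_size=5),
                nn.Tanh(),
                nn.AvgPool2d(2),
            )
            self.classifier = nn.Linear(12, 10)   # 12 features -> 10 digits

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    net = DigitNet()
    img = torch.randn(1, 1, 16, 16)               # one normalized 16x16 digit
    print(net(img).shape)                         # torch.Size([1, 10])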
The Truth, the Whole Truth, and Nothing But the Truth
Truth maintenance is a collection of techniques for doing belief revision. A truth maintenance system's task is to maintain a set of beliefs in such a way that they are not known to be contradictory and no belief is kept without a reason. Truth maintenance systems were introduced in the late seventies by Jon Doyle, and in the last five years there has been an explosion of interest in this kind of system. In this paper we present an annotated bibliography to the literature of truth maintenance systems, grouping the works referenced according to several classifications.
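As a toy illustration of the bookkeeping a truth maintenance system performs, the Python sketch below keeps a belief IN only while some justification with IN antecedents supports it, so retracting a premise withdraws everything that depended on it. The class and method names are illustrative; real systems such as Doyle's JTMS are considerably richer.

    # Toy justification-based truth maintenance: a node is IN (believed)
    # only if it is a premise or some justification has all antecedents IN.
    # Retracting a premise propagates, so no belief survives without a reason.
    # Names and structure are illustrative, not Doyle's implementation.

    class TMS:
        def __init__(self):
            self.justifications = {}   # node -> list of antecedent lists
            self.premises = set()

        def add_premise(self, node):
            self.premises.add(node)

        def retract_premise(self, node):
            self.premises.discard(node)

        def justify(self, node, antecedents):
            self.justifications.setdefault(node, []).append(list(antecedents))

        def is_in(self, node, seen=frozenset()):
            if node in self.premises:
                return True
            if node in seen:           # guard against circular support
                return False
            seen = seen | {node}
            return any(all(self.is_in(a, seen) for a in ants)
                       for ants in self.justifications.get(node, []))

    tms = TMS()
    tms.add_premise("battery-ok")
    tms.add_premise("fuel-ok")
    tms.justify("engine-starts", ["battery-ok", "fuel-ok"])
    print(tms.is_in("engine-starts"))   # True
    tms.retract_premise("battery-ok")
    print(tms.is_in("engine-starts"))   # False: belief lost with its reason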
Full-Sized Knowledge-Based Systems Research Workshop
Silverman, Barry G., Murray, Arthur J.
The Full-Sized Knowledge-Based Systems Research Workshop was held May 7-8, 1990, in Washington, D.C., as part of the AI Systems in Government Conference, sponsored by the IEEE Computer Society, the Mitre Corporation, and George Washington University in cooperation with AAAI. The goal of the workshop was to convene an international group of researchers and practitioners to share insights into the problems of building and deploying full-sized knowledge-based systems (FSKBSs).
Adjoint Operator Algorithms for Faster Learning in Dynamical Neural Networks
Barhen, Jacob, Toomarian, Nikzad Benny, Gulati, Sandeep
A methodology for faster supervised learning in dynamical nonlinear neural networks is presented. It exploits the concept of adjoint operators to enable computation of changes in the network's response due to perturbations in all system parameters, using the solution of a single set of appropriately constructed linear equations. The lower bound on speedup per learning iteration over conventional methods for calculating the neuromorphic energy gradient is O(N²), where N is the number of neurons in the network.

1 INTRODUCTION

The biggest promise of artificial neural networks as computational tools lies in the hope that they will enable fast processing and synthesis of complex information patterns. In particular, considerable efforts have recently been devoted to the formulation of efficient methodologies for learning (e.g., Rumelhart et al., 1986; Pineda, 1988; Pearlmutter, 1989; Williams and Zipser, 1989; Barhen, Gulati and Zak, 1989). The development of learning algorithms is generally based upon the minimization of a neuromorphic energy function. The fundamental requirement of such an approach is the computation of the gradient of this objective function with respect to the various parameters of the neural architecture, e.g., synaptic weights, neural ...
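The core idea, one adjoint linear solve instead of O(N²) perturbation experiments, can be illustrated on a simple fixed-point network. The NumPy sketch below is a hedged analogue, not the authors' dynamical formulation: it recovers the gradient of a quadratic energy with respect to all N² weights from a single linear solve and checks one entry by finite differences.

    # Adjoint-style gradient for a fixed-point network x = tanh(W x + b).
    # Illustrative of the adjoint idea only: instead of perturbing each of
    # the N^2 weights separately, one adjoint linear solve yields the full
    # gradient of the energy E = 0.5 * ||x* - target||^2.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 5
    W = 0.2 * rng.standard_normal((N, N))
    b = rng.standard_normal(N)
    target = rng.standard_normal(N)

    def relax(Wp):
        # Relax to the fixed point x* by simple iteration.
        xp = np.zeros(N)
        for _ in range(200):
            xp = np.tanh(Wp @ xp + b)
        return xp

    x = relax(W)
    u = W @ x + b
    s = 1.0 - np.tanh(u) ** 2                 # tanh'(u)
    J = np.eye(N) - s[:, None] * W            # Jacobian of g(x) = x - tanh(Wx+b)

    # One adjoint solve: J^T lam = dE/dx.
    lam = np.linalg.solve(J.T, x - target)

    # Gradient w.r.t. every weight: dE/dW[j,k] = lam[j] * tanh'(u[j]) * x[k].
    grad_W = (lam * s)[:, None] * x[None, :]

    # Finite-difference check on one weight (re-relaxing the network).
    def energy(Wp):
        xp = relax(Wp)
        return 0.5 * np.sum((xp - target) ** 2)

    eps = 1e-6
    Wp = W.copy()
    Wp[1, 2] += eps
    print(grad_W[1, 2], (energy(Wp) - energy(W)) / eps)   # should agree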
Unsupervised Learning in Neurodynamics Using the Phase Velocity Field Approach
Zak, Michail, Toomarian, Nikzad Benny
A new concept for unsupervised learning based upon examples introduced to the neural network is proposed. Each example is considered as an interpolation node of the velocity field in the phase space. The velocities at these nodes are selected such that all the streamlines converge to an attracting set embedded in the subspace occupied by the cluster of examples. The synaptic interconnections are then found from a learning procedure that reproduces the selected field. The theory is illustrated by examples.
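The geometric picture can be conveyed with a toy NumPy sketch: examples serve as interpolation nodes of a velocity field whose streamlines flow toward the cluster. This is only an illustration of the phase-space idea under an assumed Gaussian interpolation; it is not the authors' procedure for finding the synaptic interconnections.

    # Toy phase-velocity-field picture: each example is a node of an
    # interpolated velocity field whose streamlines converge on the cluster
    # of examples. A geometric sketch, not the authors' learning procedure.

    import numpy as np

    rng = np.random.default_rng(1)
    examples = rng.normal(loc=[2.0, -1.0], scale=0.3, size=(10, 2))
    centroid = examples.mean(axis=0)

    # Assign each node a velocity pointing toward the cluster centroid,
    # then interpolate with Gaussian radial basis functions.
    node_vel = centroid - examples

    def velocity(x, width=0.5):
        w = np.exp(-np.sum((examples - x) ** 2, axis=1) / (2 * width ** 2))
        w = w / (w.sum() + 1e-12)
        return w @ node_vel + 0.1 * (centroid - x)   # weak global pull

    # Integrate a streamline from a distant start; it approaches the
    # attracting set embedded in the subspace of the examples.
    x = np.array([-2.0, 3.0])
    for _ in range(300):
        x = x + 0.05 * velocity(x)
    print(x, centroid)    # the streamline ends near the cluster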
Time Dependent Adaptive Neural Networks
Pineda, Fernando J. (Center for Microelectronics Technology, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109)

A comparison of algorithms that minimize error functions to train the trajectories of recurrent networks reveals how complexity is traded off for causality. These algorithms are also related to time-independent formalisms. It is suggested that causal and scalable algorithms are possible when the activation dynamics of adaptive neurons is fast compared to the behavior to be learned. Standard continuous-time recurrent backpropagation is used in an example.

1 INTRODUCTION

Training the time-dependent behavior of a neural network model involves the minimization of a function that measures the difference between an actual trajectory and a desired trajectory. The standard method of accomplishing this minimization is to calculate the gradient of an error function with respect to the weights of the system and then to use the gradient in a minimization algorithm (e.g. ...
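The generic setup the abstract refers to, gradient descent on a trajectory-mismatch error, can be sketched for a discrete-time recurrent network using plain backpropagation through time. The continuous-time formalism and its causality trade-offs are beyond this toy NumPy example, and all sizes here are illustrative assumptions.

    # Minimal trajectory learning for a discrete-time recurrent network:
    # gradient descent on E = 0.5 * sum_t ||x_t - d_t||^2 via
    # backpropagation through time. Sketches the generic setup the abstract
    # describes, not the continuous-time algorithm itself.

    import numpy as np

    rng = np.random.default_rng(0)
    N, T = 3, 20
    W = 0.1 * rng.standard_normal((N, N))
    desired = 0.5 * np.sin(np.linspace(0, 2 * np.pi, T))[:, None] * np.ones(N)

    def unroll(Wp):
        xs = [np.ones(N)]                         # fixed nonzero initial state
        for _ in range(T - 1):
            xs.append(np.tanh(Wp @ xs[-1]))
        return np.array(xs)

    for step in range(2000):
        xs = unroll(W)
        grad = np.zeros_like(W)
        delta = np.zeros(N)                       # error propagated from later steps
        for t in range(T - 2, -1, -1):
            delta = delta + (xs[t + 1] - desired[t + 1])
            pre = 1.0 - xs[t + 1] ** 2            # tanh' at step t+1
            grad += np.outer(delta * pre, xs[t])
            delta = W.T @ (delta * pre)           # push error one step back
        W -= 0.02 * grad

    print(np.abs(unroll(W) - desired).mean())     # mean trajectory error after training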