Knowledge Discovery in Real Databases: A Report on the IJCAI-89 Workshop

AI Magazine

The growth in the number of available databases far outstrips the growth of corresponding knowledge. This creates both a need and an opportunity for extracting knowledge from databases. Many recent results have been reported on extracting different kinds of knowledge from databases, including diagnostic rules, drug side effects, classes of stars, rules for expert systems, and rules for semantic query optimization.


Unsupervised Learning in Neurodynamics Using the Phase Velocity Field Approach

Neural Information Processing Systems

A new concept for unsupervised learning based upon examples introduced to the neural network is proposed. Each example is considered as an interpolation node of the velocity field in the phase space. The velocities at these nodes are selected such that all the streamlines converge to an attracting set embedded in the subspace occupied by the cluster of examples. The synaptic interconnections are then found from a learning procedure that produces the selected velocity field. The theory is illustrated by examples. This paper is devoted to the development of a new concept for unsupervised learning based upon examples introduced to an artificial neural network.
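
A minimal sketch of the idea, assuming an RBF interpolation of the node velocities and node velocities that point toward the cluster centroid (both illustrative choices, not taken from the paper):

```python
import numpy as np

# Illustrative sketch: treat each example as an interpolation node of a
# velocity field in phase space. Node velocities point toward the cluster
# centroid, so integrated streamlines converge onto an attracting set
# inside the subspace occupied by the cluster.

rng = np.random.default_rng(0)
examples = rng.normal(loc=[2.0, -1.0], scale=0.3, size=(20, 2))  # one cluster
centroid = examples.mean(axis=0)
node_velocities = centroid - examples          # velocities at the nodes

def velocity(x, width=0.5):
    """RBF interpolation of the node velocities at an arbitrary point x."""
    w = np.exp(-np.sum((examples - x) ** 2, axis=1) / (2 * width ** 2))
    w /= w.sum() + 1e-12
    return w @ node_velocities

# Integrate a streamline from a random initial state; it is attracted
# toward the region occupied by the examples.
x = rng.normal(size=2)
for _ in range(200):                            # forward Euler integration
    x = x + 0.1 * velocity(x)
print("final state:", x, "centroid:", centroid)
```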


Contour-Map Encoding of Shape for Early Vision

Neural Information Processing Systems

Contour maps provide a general method for recognizing two-dimensional shapes. All but blank images give rise to such maps, and people are good at recognizing objects and shapes from them. The maps are encoded easily in long feature vectors that are suitable for recognition by an associative memory. These properties of contour maps suggest a role for them in early visual perception. The prevalence of direction-sensitive neurons in the visual cortex of mammals supports this view.
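
One plausible encoding, sketched under assumptions that are not in the paper (8-way quantization of the local contour direction and a fixed gradient-magnitude threshold): each pixel lying on a contour contributes a one-hot direction code, yielding a long binary feature vector suitable for an associative memory.

```python
import numpy as np

# Hedged sketch of turning a contour map into a long binary feature
# vector. The direction quantization and threshold are illustrative.

def contour_feature_vector(image, n_dirs=8, thresh=0.05):
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    # An iso-intensity contour runs perpendicular to the gradient.
    ang = (np.arctan2(gy, gx) + np.pi / 2) % np.pi
    bins = (ang / np.pi * n_dirs).astype(int) % n_dirs
    onehot = np.zeros(image.shape + (n_dirs,), dtype=np.uint8)
    ii, jj = np.nonzero(mag > thresh)            # encode only real contours
    onehot[ii, jj, bins[ii, jj]] = 1
    return onehot.ravel()                        # long binary vector

img = np.zeros((16, 16)); img[4:12, 4:12] = 1.0  # a filled square
v = contour_feature_vector(img)
print(v.shape, v.sum())     # (2048,) and the number of active bits
```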


Time Dependent Adaptive Neural Networks

Neural Information Processing Systems

A comparison of algorithms that minimize error functions to train the trajectories of recurrent networks reveals how complexity is traded off for causality. These algorithms are also related to time-independent formalisms. It is suggested that causal and scalable algorithms are possible when the activation dynamics of adaptive neurons is fast compared to the behavior to be learned. Standard continuous-time recurrent backpropagation is used in an example.

1 INTRODUCTION

Training the time-dependent behavior of a neural network model involves the minimization of a function that measures the difference between an actual trajectory and a desired trajectory. The standard method of accomplishing this minimization is to calculate the gradient of an error function with respect to the weights of the system and then to use the gradient in a minimization algorithm (e.g.
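
A minimal sketch of this training setup, assuming an Euler-discretized continuous-time network and a quadratic trajectory error; the gradient here is taken by finite differences purely for illustration, whereas the algorithms compared in the paper compute it analytically:

```python
import numpy as np

# Continuous-time recurrent network  tau * dx/dt = -x + tanh(W x),
# integrated by forward Euler, with an error functional measuring the
# distance between the actual and a desired trajectory.

N, T, dt, tau = 3, 50, 0.1, 1.0
rng = np.random.default_rng(1)
x0 = rng.normal(size=N)
target = np.sin(np.linspace(0, 2 * np.pi, T))[:, None] * np.ones(N)

def trajectory_error(W):
    x, err = x0.copy(), 0.0
    for t in range(T):                       # Euler-integrate the dynamics
        x = x + (dt / tau) * (-x + np.tanh(W @ x))
        err += 0.5 * np.sum((x - target[t]) ** 2)
    return err * dt

W = rng.normal(scale=0.5, size=(N, N))
for step in range(100):                      # plain gradient descent
    grad, eps = np.zeros_like(W), 1e-5
    base = trajectory_error(W)
    for i in range(N):
        for j in range(N):
            Wp = W.copy(); Wp[i, j] += eps
            grad[i, j] = (trajectory_error(Wp) - base) / eps
    W -= 0.05 * grad
print("final trajectory error:", trajectory_error(W))
```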


Predicting Weather Using a Genetic Memory: A Combination of Kanerva's Sparse Distributed Memory with Holland's Genetic Algorithms

Neural Information Processing Systems

Kanerva's sparse distributed memory (SDM) is an associative-memory model based on the mathematical properties of high-dimensional binary address spaces. Holland's genetic algorithms are a search technique for high-dimensional spaces inspired by the evolutionary processes of DNA. "Genetic Memory" is a hybrid of the above two systems, in which the memory uses a genetic algorithm to dynamically reconfigure its physical storage locations to reflect correlations between the stored addresses and data. For example, when presented with raw weather station data, the Genetic Memory discovers specific features in the weather data which correlate well with upcoming rain, and reconfigures the memory to utilize this information effectively. This architecture is designed to maximize the ability of the system to scale up to handle real-world problems.
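
A hedged sketch of the hybrid, assuming Hamming-radius activation and counter storage for the SDM, and activation counts as the GA fitness signal (all sizes and rules below are illustrative, not the paper's configuration):

```python
import numpy as np

# SDM whose hard-location addresses are periodically reconfigured by a
# genetic algorithm: frequently activated (useful) addresses breed, and
# rarely activated ones are replaced by their offspring.

rng = np.random.default_rng(2)
M, A, D, RADIUS = 200, 64, 16, 24          # locations, address bits, data bits
addresses = rng.integers(0, 2, size=(M, A))
counters  = np.zeros((M, D), dtype=int)    # SDM counter storage
usage     = np.zeros(M, dtype=int)

def activated(addr):
    return np.nonzero(np.sum(addresses != addr, axis=1) <= RADIUS)[0]

def write(addr, data):
    idx = activated(addr); usage[idx] += 1
    counters[idx] += np.where(data == 1, 1, -1)

def read(addr):
    idx = activated(addr)
    return (counters[idx].sum(axis=0) > 0).astype(int)

def ga_step(n_replace=20):
    order = np.argsort(usage)                     # fitness = activation count
    weak, fit = order[:n_replace], order[-n_replace:]
    for w in weak:                                # crossover of two fit parents
        p1, p2 = addresses[rng.choice(fit, 2, replace=False)]
        cut = rng.integers(1, A)
        child = np.concatenate([p1[:cut], p2[cut:]])
        child ^= (rng.random(A) < 0.01).astype(child.dtype)  # mutation
        addresses[w], counters[w], usage[w] = child, 0, 0

for _ in range(100):                              # write random patterns
    write(rng.integers(0, 2, A), rng.integers(0, 2, D))
ga_step()                                         # reconfigure the memory
```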


Adjoint Operator Algorithms for Faster Learning in Dynamical Neural Networks

Neural Information Processing Systems

A methodology for faster supervised learning in dynamical nonlinear neural networks is presented. It exploits the concept of adjoint operators to enable computation of changes in the network's response due to perturbations in all system parameters, using the solution of a single set of appropriately constructed linear equations. The lower bound on speedup per learning iteration over conventional methods for calculating the neuromorphic energy gradient is O(N^2), where N is the number of neurons in the network.

1 INTRODUCTION

The biggest promise of artificial neural networks as computational tools lies in the hope that they will enable fast processing and synthesis of complex information patterns. In particular, considerable efforts have recently been devoted to the formulation of efficient methodologies for learning (e.g., Rumelhart et al., 1986; Pineda, 1988; Pearlmutter, 1989; Williams and Zipser, 1989; Barhen, Gulati and Zak, 1989). The development of learning algorithms is generally based upon the minimization of a neuromorphic energy function. The fundamental requirement of such an approach is the computation of the gradient of this objective function with respect to the various parameters of the neural architecture, e.g., synaptic weights and neural gains.
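
A worked toy example of the adjoint idea, assuming a simple additive network with fixed-point dynamics x = tanh(Wx + u) and a quadratic energy (both assumptions for illustration, not the paper's exact model): the gradient of the energy with respect to all N^2 weights follows from a single linear adjoint solve, rather than N^2 independent perturbation computations.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 5
W = rng.normal(scale=0.3, size=(N, N))
u = rng.normal(size=N)
x_target = rng.normal(size=N)

x = np.zeros(N)
for _ in range(500):                       # relax to the fixed point x*
    x = np.tanh(W @ x + u)

# Energy E(x) = 0.5 * ||x - x_target||^2 evaluated at the fixed point.
dE_dx = x - x_target
J = (1 - x**2)[:, None] * W                # Jacobian of tanh(Wx+u) at x*
lam = np.linalg.solve((np.eye(N) - J).T, dE_dx)   # single adjoint solve
grad_W = np.outer((1 - x**2) * lam, x)     # dE/dW_ij = lam_i (1-x_i^2) x_j

# Check one component against a brute-force finite difference.
eps, (i, j) = 1e-6, (1, 2)
Wp = W.copy(); Wp[i, j] += eps
xp = np.zeros(N)
for _ in range(500):
    xp = np.tanh(Wp @ xp + u)
fd = (0.5 * np.sum((xp - x_target) ** 2)
      - 0.5 * np.sum((x - x_target) ** 2)) / eps
print(grad_W[i, j], fd)                    # the two values agree closely
```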


Handwritten Digit Recognition with a Back-Propagation Network

Neural Information Processing Systems

We present an application of back-propagation networks to handwritten digit recognition. Minimal preprocessing of the data was required, but the architecture of the network was highly constrained and specifically designed for the task. The input of the network consists of normalized images of isolated digits. The method has a 1% error rate and about a 9% reject rate on zipcode digits provided by the U.S. Postal Service.

1 INTRODUCTION

The main point of this paper is to show that large back-propagation (BP) networks can be applied to real image-recognition problems without a large, complex preprocessing stage requiring detailed engineering. Unlike most previous work on the subject (Denker et al., 1989), the learning network is directly fed with images, rather than feature vectors, thus demonstrating the ability of BP networks to deal with large amounts of low-level information. Previous work performed on simple digit images (Le Cun, 1989) showed that the architecture of the network strongly influences the network's generalization ability. Good generalization can only be obtained by designing a network architecture that contains a certain amount of a priori knowledge about the problem. The basic design principle is to minimize the number of free parameters that must be determined by the learning algorithm, without overly reducing the computational power of the network.
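
A minimal sketch of this design principle, assuming illustrative layer sizes rather than the paper's exact architecture: local receptive fields with shared weights, i.e. a convolution, cut the free-parameter count dramatically compared to full connectivity.

```python
import numpy as np

def conv2d(img, kernels, stride=2):
    """Valid convolution with weight sharing: each kernel scans the image."""
    K, k = kernels.shape[0], kernels.shape[1]
    out = np.zeros((K, (img.shape[0] - k) // stride + 1,
                       (img.shape[1] - k) // stride + 1))
    for c in range(K):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                patch = img[i*stride:i*stride+k, j*stride:j*stride+k]
                out[c, i, j] = np.tanh(np.sum(patch * kernels[c]))
    return out

rng = np.random.default_rng(4)
img = rng.random((16, 16))                        # a normalized digit image
kernels = rng.normal(scale=0.1, size=(4, 5, 5))   # 4 shared 5x5 kernels
fmaps = conv2d(img, kernels)
print(fmaps.shape)                                # (4, 6, 6) feature maps

# Parameter count: 4*5*5 = 100 shared weights, versus
# 16*16 * 4*6*6 = 36864 weights for an equivalent unconstrained layer.
```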