Combined Neural Networks for Time Series Analysis
We propose a method for improving the performance of any network designed to predict the next value of a time series. We advocate analyzing the deviations of the network's predictions from the data in the training set. This can be carried out by a secondary network trained on the time series of these residuals. The combined system of the two networks is viewed as the new predictor. We demonstrate the simplicity and success of this method by applying it to the sunspots data. The small corrections of the secondary network can be regarded as resulting from a Taylor expansion of a complex network which includes the combined system.
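A minimal sketch of this combined-predictor idea, using scikit-learn MLPs as stand-ins for the two networks; the window length, layer sizes, and synthetic series are assumptions, not the paper's setup.

import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(series, lag):
    """Turn a 1-D series into lagged input windows and next-value targets."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return X, y

lag = 12
series = np.sin(np.linspace(0, 60, 600))            # placeholder series (e.g. yearly sunspot counts)
X, y = make_windows(series, lag)

primary = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
residuals = y - primary.predict(X)                  # deviations on the training set

# Secondary network trained on the time series of residuals.
Xr, yr = make_windows(residuals, lag)
secondary = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0).fit(Xr, yr)

# Combined predictor: primary forecast plus the secondary network's small correction.
combined = primary.predict(X[lag:]) + secondary.predict(Xr)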
Packet Routing in Dynamically Changing Networks: A Reinforcement Learning Approach
Boyan, Justin A., Littman, Michael L.
The field of reinforcement learning has grown dramatically over the past several years, but with the exception of backgammon [8, 2], has had few successful applications to large-scale, practical tasks. This paper demonstrates that the practical task of routing packets through a communication network is a natural application for reinforcement learning algorithms.
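One way to cast packet routing as a reinforcement learning problem is a Q-learning-style rule in which each node maintains estimated delivery times through each of its neighbors (the Q-routing idea associated with this paper). The sketch below is only an illustrative rendering; the class structure, variable names, and learning rate are assumptions rather than the paper's exact formulation.

from collections import defaultdict

class RoutingNode:
    """Per-node table of estimated delivery times Q[neighbor][destination]."""

    def __init__(self, neighbors, eta=0.5):
        self.eta = eta
        self.Q = {y: defaultdict(float) for y in neighbors}

    def choose_neighbor(self, dest):
        # Greedy policy: forward to the neighbor with the lowest estimated delivery time.
        return min(self.Q, key=lambda y: self.Q[y][dest])

    def update(self, neighbor, dest, queue_time, transit_time, neighbor_estimate):
        # After forwarding, the neighbor reports its own best remaining-time
        # estimate; nudge our estimate toward the observed total.
        target = queue_time + transit_time + neighbor_estimate
        self.Q[neighbor][dest] += self.eta * (target - self.Q[neighbor][dest])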
Clustering with a Domain-Specific Distance Measure
Gold, Steven, Mjolsness, Eric, Rangarajan, Anand
The distance measure and learning problem are formally described as nested objective functions. We derive an efficient algorithm by using optimization techniques that allow us to divide up the objective function into parts which may be minimized in distinct phases. The algorithm has accurately recreated 10 prototypes from a randomly generated sample database of 100 images consisting of 20 points each in 120 experiments. Finally, by incorporating permutation invariance in our distance measure, we have a technique that we may be able to apply to the clustering of graphs. Our goal is to develop measures which will enable the learning of objects with shape or structure.
Acknowledgements
This work has been supported by AFOSR grant F49620-92-J-0465 and ONR/DARPA grant N00014-92-J-4048.
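As an illustration of the phased minimization described in this abstract, the sketch below alternates between assigning point-set images to prototypes under a domain-specific distance and re-fitting the prototypes. The translation-invariant distance and the simple mean update are assumptions standing in for the paper's transformation- and permutation-invariant measure.

import numpy as np

def domain_distance(a, b):
    # Squared distance between two point sets after removing translation
    # (a placeholder for the paper's richer, permutation-invariant measure).
    return float(np.sum(((a - a.mean(0)) - (b - b.mean(0))) ** 2))

def cluster(images, prototypes, iters=10):
    for _ in range(iters):
        # Phase 1: assign each image to its nearest prototype under the domain distance.
        labels = [int(np.argmin([domain_distance(x, p) for p in prototypes]))
                  for x in images]
        # Phase 2: update each prototype from the images assigned to it.
        for k in range(len(prototypes)):
            members = [x for x, l in zip(images, labels) if l == k]
            if members:
                prototypes[k] = np.mean(members, axis=0)
    return labels, prototypes

# Example: 100 images of 20 two-dimensional points each, 10 prototypes.
rng = np.random.default_rng(0)
images = [rng.normal(size=(20, 2)) for _ in range(100)]
prototypes = [rng.normal(size=(20, 2)) for _ in range(10)]
labels, prototypes = cluster(images, prototypes)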
Functional Models of Selective Attention and Context Dependency
Scope
This workshop reviewed and classified the various models which have emerged from the general concept of selective attention and context dependency, and sought to identify their commonalities. It was concluded that the motivation and mechanism of these functional models are "efficiency" and "factoring", respectively. The workshop focused on computational models of selective attention and context dependency within the realm of neural networks. We treated only "functional" models; computational models of biological neural systems, and symbolic or rule-based systems, were omitted from the discussion.
Presentations
Thomas H. Hildebrandt presented the results of his recent survey of the literature on functional models of selective attention and context dependency.
Directional Hearing by the Mauthner System
Guzik, Audrey L., Eaton, Robert C.
The most prominent feature of this network is the pair of large Mauthner cells, whose axons cross the midline and descend the spinal cord to synapse on primary motoneurons. The Mauthner system also includes inhibitory neurons, the PHP cells, which have a unique and intense field-effect inhibition at the spike-initiating zone of the Mauthner cells (Faber and Korn, 1978). The Mauthner system is part of the full brainstem escape network, which also includes two pairs of cells homologous to the Mauthner cell and other populations of reticulospinal neurons. With this network, fish initiate escapes only from appropriate stimuli, turn away from the offending stimulus, and do so very rapidly, with a latency around 15 msec in goldfish. The Mauthner cells play an important role in these functions.
Signature Verification using a "Siamese" Time Delay Neural Network
Bromley, Jane, Guyon, Isabelle, LeCun, Yann, Säckinger, Eduard, Shah, Roopak
The aim of the project was to build a signature verification system based on the NCR 5990 Signature Capture Device (a pen-input tablet) and to use 80 bytes or less for signature feature storage, so that the features can be stored on the magnetic strip of a credit card. Verification using a digitizer such as the 5990, which generates spatial coordinates as a function of time, is known as dynamic verification. Much research has been carried out on signature verification.
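A minimal sketch of the Siamese arrangement behind such a verifier: two identical subnetworks with shared weights map each signature (a time series of pen coordinates) to a fixed-size feature vector, and the two vectors are compared directly. The single time-delay layer, window size, and random placeholder data below are assumptions; the paper's architecture and feature set are richer.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 2 * 5))       # shared weights: 16 features over a window of 5 (x, y) samples

def subnetwork(signature):
    """Time-delay layer: slide a window over the pen trajectory, then average."""
    T = len(signature)
    windows = np.array([signature[t:t + 5].ravel() for t in range(T - 5)])
    hidden = np.tanh(windows @ W.T)    # (T-5, 16) feature trajectory
    return hidden.mean(axis=0)         # fixed-size signature feature vector

def similarity(sig_a, sig_b):
    """Cosine similarity between the two shared-weight outputs."""
    fa, fb = subnetwork(sig_a), subnetwork(sig_b)
    return float(fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb)))

# Two signatures as (T, 2) arrays of pen coordinates (placeholder data).
genuine = rng.normal(size=(100, 2))
candidate = rng.normal(size=(100, 2))
print(similarity(genuine, candidate))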
Discontinuous Generalization in Large Committee Machines
The problem of learning from examples in multilayer networks is studied within the framework of statistical mechanics. Using the replica formalism, we calculate the average generalization error of a fully connected committee machine in the limit of a large number of hidden units. If the number of training examples is proportional to the number of inputs in the network, the generalization error as a function of the training set size approaches a finite value. If the number of training examples is proportional to the number of weights in the network, we find first-order phase transitions with a discontinuous drop in the generalization error for both binary and continuous weights.
1 INTRODUCTION
Feedforward neural networks are widely used as nonlinear, parametric models for the solution of classification tasks and function approximation. Trained from examples of a given task, they are able to generalize, i.e. to compute the correct output for new, unknown inputs.
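For readers unfamiliar with the architecture, a fully connected committee machine can be sketched as follows: each of K hidden units takes the sign of a weighted sum of all N inputs, and the output is the majority vote of those signs. The sizes and random weights below are arbitrary illustrations, not the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(1)
N, K = 100, 15                          # number of inputs and hidden units
W = rng.normal(size=(K, N))             # fully connected first-layer weights

def committee(x, weights=W):
    hidden = np.sign(weights @ x)       # each committee member's vote
    return np.sign(hidden.sum())        # output: majority decision

x = rng.choice([-1.0, 1.0], size=N)
print(committee(x))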
Connectionism for Music and Audition
In recent years, NIPS has heard neural networks generate tunes and harmonize chorales. With a large amount of music becoming available in computer-readable form, real data can be used to train connectionist models. At the beginning of this workshop, Andreas Weigend focused on architectures to capture structure on multiple time scales.