Constructive Learning Using Internal Representation Conflicts

Neural Information Processing Systems

The first class of network adaptation algorithms starts out with a redundant architecture and proceeds by pruning away seemingly unimportant weights (Sietsma and Dow, 1988; Le Cun et al., 1990). A second class of algorithms starts off with a sparse architecture and grows the network to the complexity required by the problem. Several algorithms have been proposed for growing feedforward networks. The upstart algorithm of Frean (1990) and the cascade-correlation algorithm of Fahlman (1990) are examples of this approach.
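As a rough illustration of the first, pruning-based class, the sketch below zeros out the smallest-magnitude weights of a trained layer. The threshold fraction and the plain magnitude criterion are illustrative assumptions; the cited methods use more principled saliency measures.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, fraction: float = 0.2) -> np.ndarray:
    """Zero out the given fraction of weights with the smallest magnitude.

    A crude stand-in for the 'start redundant, then prune' strategy; real
    methods estimate each weight's importance rather than using raw magnitude.
    """
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Example: prune roughly 20% of a random 4x4 weight matrix
w = np.random.randn(4, 4)
print(prune_by_magnitude(w, 0.2))
```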


Coupled Dynamics of Fast Neurons and Slow Interactions

Neural Information Processing Systems

A simple model of coupled dynamics of fast neurons and slow interactions, modelling self-organization in recurrent neural networks, leads naturally to an effective statistical mechanics characterized by a partition function which is an average over a replicated system. This is reminiscent of the replica trick used to study spin-glasses, but with the difference that the number of replicas has a physical meaning as the ratio of two temperatures and can be varied throughout the whole range of real values. The model has interesting phase consequences as a function of varying this ratio and external stimuli, and can be extended to a range of other models. As the basic archetypal model we consider a system of Ising spin neurons \(\sigma_i \in \{-1, 1\}\), \(i \in \{1, \ldots, N\}\), interacting via continuous-valued symmetric interactions, \(J_{ij}\), which themselves evolve in response to the states of the neurons. The neuron dynamics are governed by an energy of the form \(H(\{\sigma\}) = -\sum_{i<j} J_{ij}\,\sigma_i\sigma_j\) (2), and the subscript \(\{J_{ij}\}\) indicates that the \(\{J_{ij}\}\) are to be considered as quenched variables.
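To make the abstract's claim concrete, the following sketch writes out, in standard form, how a replicated partition function with a real-valued replica number can arise from coupled fast/slow dynamics of this kind. The Gaussian weight-decay term, the symbol \(\mu\), and the identification \(n = \tilde{\beta}/\beta\) follow conventional treatments of such models and are assumptions here, not equations quoted from the paper.

```latex
% Sketch under stated assumptions; not the paper's exact equations.
\begin{align}
  Z_{\beta}(\{J\}) &= \sum_{\{\sigma\}} e^{-\beta H(\{\sigma\})}
    && \text{fast spins equilibrate at inverse temperature } \beta, \\
  \bar{Z} &\propto \int \prod_{i<j} dJ_{ij}\;
    e^{-\frac{1}{2}\tilde{\beta}\mu \sum_{i<j} J_{ij}^{2}}\,
    \bigl[ Z_{\beta}(\{J\}) \bigr]^{\,n},
    && n = \tilde{\beta}/\beta .
\end{align}
```

Because \(n\) is the ratio of the two temperatures governing the interactions and the neurons, it is a genuine physical parameter that can take any real value, which is the point the abstract emphasizes.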


Connectionist Models for Auditory Scene Analysis

Neural Information Processing Systems

Although the visual and auditory systems share the same basic tasks of informing an organism about its environment, most connectionist work on hearing to date has been devoted to the very different problem of speech recognition. We believe that the most fundamental task of the auditory system is the analysis of acoustic signals into components corresponding to individual sound sources, which Bregman has called auditory scene analysis. Computational and connectionist work on auditory scene analysis is reviewed, and the outline of a general model that includes these approaches is described.


Neural Network Methods for Optimization Problems

Neural Information Processing Systems

In a talk entitled "Trajectory Control of Convergent Networks with applications to TSP", Natan Peterfreund (Computer Science, Technion) dealt with the problem of controlling the trajectories of continuous convergent neural network models for solving optimization problems, without affecting their set of equilibria or their convergence properties. Natan presented a class of feedback control functions which achieve this objective while also improving the convergence rates. A modified Hopfield and Tank neural network model, developed through the proposed feedback approach, was found to substantially improve on the results of the original model in solving the Traveling Salesman Problem. The proposed feedback overcame the 2n-fold symmetry of the TSP. In a talk entitled "Training Feedforward Neural Networks quickly and accurately using Very Fast Simulated Reannealing Methods", Bruce Rosen (Asst.


Convergence of Stochastic Iterative Dynamic Programming Algorithms

Neural Information Processing Systems

Increasing attention has recently been paid to algorithms based on dynamic programming (DP) due to the suitability of DP for learning problems involving control. In stochastic environments where the system being controlled is only incompletely known, however, a unifying theoretical account of these methods has been missing. In this paper we relate DP-based learning algorithms to the powerful techniques of stochastic approximation via a new convergence theorem, enabling us to establish a class of convergent algorithms to which both TD(λ) and Q-learning belong.

1 INTRODUCTION

Learning to predict the future and to find an optimal way of controlling it are the basic goals of learning systems that interact with their environment. A variety of algorithms are currently being studied for the purposes of prediction and control in incompletely specified, stochastic environments. Here we consider learning algorithms defined in Markov environments. There are actions or controls (u) available for the learner that affect both the state transition probabilities and the probability distribution for the immediate, state-dependent costs (c_i(u)) incurred by the learner.
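As a concrete instance of the DP-based learning algorithms the paper covers, the sketch below runs tabular Q-learning on a small Markov environment with controls that affect transitions and state-dependent costs. The toy environment, exploration scheme, and constant step size are illustrative assumptions, not details from the paper; the update itself is the standard stochastic-approximation step whose convergence the paper analyzes.

```python
import numpy as np

def q_learning(P, C, gamma=0.95, steps=1000, alpha=0.1, eps=0.1, seed=0):
    """Tabular Q-learning on a finite MDP with costs to be minimized.

    P[u][s, s'] : transition probabilities under control u
    C[s, u]     : expected immediate cost of taking u in state s
    Returns the learned state-action cost-to-go table Q[s, u].
    """
    rng = np.random.default_rng(seed)
    n_states, n_actions = C.shape
    Q = np.zeros((n_states, n_actions))
    s = 0
    for _ in range(steps):
        # epsilon-greedy choice of control (greedy = lowest predicted cost)
        u = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmin())
        s_next = rng.choice(n_states, p=P[u][s])
        target = C[s, u] + gamma * Q[s_next].min()
        Q[s, u] += alpha * (target - Q[s, u])   # stochastic approximation step
        s = s_next
    return Q

# Tiny 2-state, 2-action example (made-up numbers, for illustration only)
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # control 0
     np.array([[0.5, 0.5], [0.6, 0.4]])]   # control 1
C = np.array([[1.0, 0.5], [0.2, 1.5]])
print(q_learning(P, C))
```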


Analyzing Cross-Connected Networks

Neural Information Processing Systems

The nonlinear complexities of neural networks make network solutions difficult to understand. Sanger's contribution analysis is here extended to the analysis of networks automatically generated by the cascade-correlation learning algorithm. Because such networks have cross connections that supersede hidden layers, standard analyses of hidden unit activation patterns are insufficient. A contribution is defined as the product of an output weight and the associated activation on the sending unit, whether that sending unit is an input or a hidden unit, multiplied by the sign of the output target for the current input pattern. Intercorrelations among contributions, as gleaned from the matrix of contributions × input patterns, can be subjected to principal components analysis (PCA) to extract the main features of variation in the contributions. Such an analysis is applied to three problems: continuous XOR, arithmetic comparison, and distinguishing between two interlocking spirals. In all three cases, this technique yields useful insights into network solutions that are consistent across several networks.
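The sketch below shows one way the contribution matrix described above could be assembled and passed through PCA. The array shapes, the restriction to a single output unit, and the use of SVD-based PCA are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

def contribution_matrix(sending_acts, output_weights, target_signs):
    """Contributions for one output unit across all input patterns.

    sending_acts   : (n_patterns, n_senders) activations of input + hidden units
    output_weights : (n_senders,) weights into the chosen output unit
    target_signs   : (n_patterns,) sign of the output target for each pattern
    Entry [p, j] = weight_j * activation_{p,j} * sign(target_p).
    """
    return sending_acts * output_weights[None, :] * target_signs[:, None]

def principal_components(contribs, n_components=2):
    """PCA (via SVD) of the contribution matrix to expose its main modes of variation."""
    centered = contribs - contribs.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variance_explained = s**2 / np.sum(s**2)
    return vt[:n_components], variance_explained[:n_components]

# Illustrative random example (shapes only; not data from the paper)
acts = np.random.rand(20, 5)
w = np.random.randn(5)
signs = np.sign(np.random.randn(20))
components, var = principal_components(contribution_matrix(acts, w, signs))
print(var)
```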


Foraging in an Uncertain Environment Using Predictive Hebbian Learning

Neural Information Processing Systems

Survival is enhanced by an ability to predict the availability of food, the likelihood of predators, and the presence of mates. We present a concrete model that uses diffuse neurotransmitter systems to implement a predictive version of a Hebb learning rule embedded in a neural architecture based on anatomical and physiological studies on bees. The model captured the strategies seen in the behavior of bees and a number of other animals when foraging in an uncertain environment. The predictive model suggests a unified way in which neuromodulatory influences can be used to bias actions and control synaptic plasticity. Successful predictions enhance adaptive behavior by allowing organisms to prepare for future actions, rewards, or punishments. Moreover, it is possible to improve upon behavioral choices if the consequences of executing different actions can be reliably predicted. Although classical and instrumental conditioning results from the psychological literature [1] demonstrate that the vertebrate brain is capable of reliable prediction, how these predictions are computed in brains is not yet known. The brains of vertebrates and invertebrates possess small nuclei which project axons throughout large expanses of target tissue and deliver various neurotransmitters such as dopamine, norepinephrine, and acetylcholine [4]. The activity in these systems may report on reinforcing stimuli in the world or may reflect an expectation of future reward [5, 6, 7, 8].
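A minimal sketch of a predictive Hebbian update of the kind the abstract describes: a temporal-difference-style prediction error stands in for the diffusely broadcast neuromodulatory signal, and it gates the weight change. The linear predictor, variable names, and parameter values are assumptions for illustration, not the model's actual equations.

```python
import numpy as np

def predictive_hebbian_step(w, x_prev, x_curr, r_curr, lr=0.05, gamma=0.9):
    """One update of a predictive (TD-style) Hebbian rule.

    delta = r(t) + gamma*V(t) - V(t-1) plays the role of a diffuse
    neuromodulatory signal that both biases action choice and gates
    synaptic plasticity (sketch only; details are assumptions).
    """
    v_prev = float(w @ x_prev)          # prediction from the previous sensory state
    v_curr = float(w @ x_curr)          # prediction from the current sensory state
    delta = r_curr + gamma * v_curr - v_prev
    w = w + lr * delta * x_prev         # Hebbian term gated by the prediction error
    return w, delta

# Illustrative use: two sensory features (e.g. flower colours), made-up inputs
w = np.zeros(2)
w, delta = predictive_hebbian_step(w, np.array([1.0, 0.0]), np.array([0.0, 1.0]), r_curr=1.0)
print(w, delta)
```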


Adaptive Knot Placement for Nonparametric Regression

Neural Information Processing Systems

We show how an "Elman" network architecture, constructed from recurrently connected oscillatory associative memory network modules, can employ selective "attentional" control of synchronization to direct the flow of communication and computation within the architecture to solve a grammatical inference problem. Previously we have shown how the discrete-time "Elman" network algorithm can be implemented in a network completely described by continuous ordinary differential equations. The time steps (machine cycles) of the system are implemented by rhythmic variation (clocking) of a bifurcation parameter. In this architecture, oscillation amplitude codes the information content or activity of a module (unit), whereas phase and frequency are used to "softwire" the network. Only synchronized modules communicate by exchanging amplitude information; the activity of non-resonating modules contributes incoherent crosstalk noise. Attentional control is modeled as a special subset of the hidden modules whose outputs affect the resonant frequencies of other hidden modules. They control synchrony among the other modules and direct the flow of computation (attention) to effect transitions between two subgraphs of a thirteen-state automaton which the system emulates to generate a Reber grammar. The internal crosstalk noise is used to drive the required random transitions of the automaton.
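A heavily simplified sketch of the synchronization-gating idea described above: one module receives another's amplitude signal only to the extent that the two stay phase-coherent. The phase-coupling form, coherence estimate, and parameter values are assumptions for illustration, not the paper's differential equations.

```python
import numpy as np

def simulate_modules(freq_a, freq_b, steps=2000, dt=0.01, k=1.0):
    """Two oscillator modules; module B integrates A's amplitude signal,
    weighted by a running phase-coherence estimate (a crude stand-in for
    'only synchronized modules communicate')."""
    phase_a, phase_b = 0.0, 0.0
    coherence = 0.0
    received = 0.0
    amp_a = 1.0                       # amplitude codes module A's activity
    for _ in range(steps):
        phase_a += 2 * np.pi * freq_a * dt
        phase_b += 2 * np.pi * freq_b * dt
        # running estimate of phase coherence between the two modules
        coherence = 0.99 * coherence + 0.01 * np.cos(phase_a - phase_b)
        received += dt * k * max(coherence, 0.0) * amp_a
    return received

# Matched frequencies transmit; mismatched ones contribute mostly crosstalk
print(simulate_modules(5.0, 5.0), simulate_modules(5.0, 7.3))
```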


Optimal Signalling in Attractor Neural Networks

Neural Information Processing Systems

It is well known that a given cortical neuron can respond with a different firing pattern for the same synaptic input, depending on its firing history and on the effects of modulatory transmitters (see [Connors and Gutnick, 1990] for a review). The time span of different channel conductances is very broad, and the influence of some ionic currents varies with the history of the membrane potential [Lytton, 1991]. Motivated by the history-dependent nature of neuronal firing, we continue our