Shaping the State Space Landscape in Recurrent Networks

Neural Information Processing Systems

Fully recurrent (asymmetric) networks can be thought of as dynamical systems. Their dynamics can be shaped to implement content-addressable memories, recognize sequences, or generate trajectories. Unfortunately, several problems can arise: first, convergence in the state space is not guaranteed; second, the learned fixed points or trajectories are not necessarily stable; finally, there may exist spurious fixed points and/or spurious "attracting" trajectories that do not correspond to any patterns.
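
The first problem can be illustrated with a minimal sketch, assuming simple discrete-time dynamics x <- tanh(Wx) as a stand-in for the paper's formulation: with a random asymmetric weight matrix there is no Lyapunov-style guarantee that the iteration settles into a fixed point.

    import numpy as np

    def iterate_to_fixed_point(W, x0, tol=1e-6, max_steps=1000):
        # Iterate x <- tanh(W @ x) and report whether the trajectory
        # settles into a fixed point within the step budget.
        x = x0
        for _ in range(max_steps):
            x_next = np.tanh(W @ x)
            if np.linalg.norm(x_next - x) < tol:
                return x_next, True
            x = x_next
        return x, False

    rng = np.random.default_rng(0)
    W = rng.normal(scale=1.5, size=(10, 10))  # asymmetric: no convergence guarantee
    x, converged = iterate_to_fixed_point(W, rng.normal(size=10))
    print("converged:", converged)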


Constructing Hidden Units using Examples and Queries

Neural Information Processing Systems

While the network loading problem for 2-layer threshold nets is NP-hard when learning from examples alone (as with backpropagation), Baum (1991) has proved that a learner can employ queries to evade the hidden-unit credit assignment problem and PAC-load nets with up to four hidden units in polynomial time. Empirical tests show that the method can also learn far more complicated functions, such as randomly generated networks with 200 hidden units. The algorithm easily approximates Wieland's 2-spirals function using a single layer of 50 hidden units, and requires only 30 minutes of CPU time to learn 200-bit parity to 99.7% accuracy.
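
The core query primitive can be sketched as follows (names chosen here; this is a simplification of Baum's full algorithm, which uses such boundary points to recover the hidden units' hyperplanes): bisect the segment between a positive and a negative example, querying the target at each midpoint, to locate a point on the decision boundary.

    import numpy as np

    def boundary_point(query, x_pos, x_neg, iters=40):
        # Binary search along the segment between a positive and a
        # negative example; each midpoint test is one membership query.
        for _ in range(iters):
            mid = 0.5 * (x_pos + x_neg)
            if query(mid):
                x_pos = mid
            else:
                x_neg = mid
        return 0.5 * (x_pos + x_neg)

    # Toy target: a single threshold unit standing in for the unknown net.
    w, b = np.array([1.0, -2.0]), 0.3
    target = lambda x: float(w @ x) + b > 0
    p = boundary_point(target, np.array([5.0, 0.0]), np.array([-5.0, 0.0]))
    print(p, w @ p + b)  # p lies approximately on the separating hyperplane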


On the Circuit Complexity of Neural Networks

Neural Information Processing Systems

Viewing n-variable Boolean functions as vectors in R^(2^n), we invoke tools from linear algebra and linear programming to derive new results on the realizability of Boolean functions using threshold gates. Using this approach, one can obtain: (1) upper bounds on the number of spurious memories in Hopfield networks, and on the number of functions implementable by a depth-d threshold circuit; (2) a lower bound on the number of orthogonal input functions.
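
The vector view can be made explicit; in notation chosen here (not necessarily the paper's), an n-variable Boolean function corresponds to the vector of its values on all 2^n inputs:

    \[
    f:\{0,1\}^n \to \{-1,+1\}
    \quad\longmapsto\quad
    v_f = \bigl(f(x^{(1)}),\, f(x^{(2)}),\, \dots,\, f(x^{(2^n)})\bigr) \in \mathbb{R}^{2^n}
    \]

and f is computed by a single threshold gate with weights w and threshold t iff the system

    \[
    f(x^{(k)})\,\bigl(w^\top x^{(k)} - t\bigr) > 0, \qquad k = 1,\dots,2^n
    \]

of 2^n linear inequalities in (w, t) is feasible, which linear programming can decide.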


Natural Dolphin Echo Recognition Using an Integrator Gateway Network

Neural Information Processing Systems

We have been studying the performance of a bottlenosed dolphin on a delayed matching-to-sample task to gain insight into the processes and mechanisms that the animal uses during echolocation. The dolphin recognizes targets by emitting natural sonar signals and listening to the echoes that return. This paper describes a novel neural network architecture, called an integrator gateway network, that we have developed to account for this performance. The integrator gateway network combines information from multiple echoes to classify targets with about 90% accuracy. In contrast, a standard backpropagation network achieved only about 63% accuracy.
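
The integration idea can be sketched in a few lines (names chosen here; the paper's gateway architecture is more elaborate): accumulate per-echo class evidence across the echo train before deciding, rather than classifying each echo in isolation.

    import numpy as np

    def classify_echo_train(echoes, score_fn, n_classes):
        # Sum per-echo class scores (e.g. the outputs of a trained net)
        # across the echo train, then decide once on the accumulated
        # evidence rather than on any single echo.
        total = np.zeros(n_classes)
        for echo in echoes:
            total += score_fn(echo)
        return int(np.argmax(total))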


Learning to See Rotation and Dilation with a Hebb Rule

Neural Information Processing Systems

Earlier work (Sereno, 1987) showed that a feedforward network with area-V1-like input-layer units and a Hebb rule can develop area-MT-like second-layer units that solve the aperture problem for pattern motion. The present study extends this earlier work to more complex motions. Saito et al. (1986) showed that neurons with large receptive fields in macaque visual area MST are sensitive to different senses of rotation and dilation, irrespective of the receptive-field location of the movement singularity. A network with an MT-like second layer was trained and tested on combinations of rotating, dilating, and translating patterns. Third-layer units learn to detect specific senses of rotation or dilation in a position-independent fashion, despite having position-dependent direction selectivity within their receptive fields.
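
A minimal sketch of the learning rule (a generic normalized Hebb update on a linear unit; the paper's network and input encoding are more structured):

    import numpy as np

    def hebb_update(w, x, eta=0.01):
        # Plain Hebb rule: strengthen each weight in proportion to the
        # product of pre- and post-synaptic activity; renormalizing
        # keeps the weight vector bounded.
        y = w @ x                # post-synaptic response
        w = w + eta * y * x      # Hebbian increment
        return w / np.linalg.norm(w)

Repeated presentation of motion patterns drawn from a fixed distribution drives such a unit toward the dominant correlational structure of its inputs, which is the mechanism by which selectivity can emerge in this kind of model without a supervised error signal.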


Compact EEPROM-based Weight Functions

Neural Information Processing Systems

The recent surge of interest in neural networks and parallel analog computation has motivated the need for compact analog computing blocks. Analog weighting is an important computational function of this class: it combines two analog values, one of which is typically varying (the input) and one of which is typically fixed (the weight), or at least varying more slowly. The varying value is "weighted" by the fixed value through the "weighting function", typically multiplication. Analog weighting is most interesting when the overall computational task involves computing the "weighted sum of the inputs."
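
In digital form the computation the analog block implements reduces to a few lines (a sketch of the function, not of the EEPROM circuit itself):

    import numpy as np

    def weighted_sum(x, w):
        # The weighting function here is multiplication; the block of
        # interest computes sum_i w_i * x_i over the input vector.
        return float(np.dot(w, x))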


Planning with an Adaptive World Model

Neural Information Processing Systems

We present a new connectionist planning method [TML90]. Through interaction with an unknown environment, a world model is progressively constructed using gradient descent. To derive optimal actions with respect to future reinforcement, planning is applied in two steps: an experience network proposes a plan, which is subsequently optimized by gradient descent with a chain of world models, so that an optimal reinforcement may be obtained when the plan is actually executed. The appropriateness of this method is demonstrated by a robotics application and a pole balancing task.
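
The second step can be sketched as follows, with a linear world model s_{t+1} = A s_t + B a_t standing in for the learned network model and a final-state goal cost (names chosen here for illustration):

    import numpy as np

    def optimize_plan(A, B, s0, goal, T, steps=200, lr=0.05):
        # Gradient descent on an action sequence through a chain of
        # world-model copies: roll the model forward, then propagate
        # the final-state error backward through the chain.
        actions = np.zeros((T, B.shape[1]))
        for _ in range(steps):
            states = [s0]
            for a in actions:                      # forward rollout
                states.append(A @ states[-1] + B @ a)
            grad_s = 2.0 * (states[-1] - goal)     # d cost / d s_T
            for t in reversed(range(T)):           # backward through the chain
                actions[t] -= lr * (B.T @ grad_s)  # d cost / d a_t
                grad_s = A.T @ grad_s              # d cost / d s_t
        return actions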


Modeling Time Varying Systems Using Hidden Control Neural Architecture

Neural Information Processing Systems

This paper introduces a generalization of the layered neural network that can implement a time-varying nonlinear mapping between its observable input and output. The variation of the network's mapping is due to an additional, hidden control input, while the network parameters remain unchanged. We propose an algorithm for finding the network parameters and the hidden control sequence from a training set of examples of observable input and output. This algorithm implements an approximate maximum likelihood estimation of the parameters of an equivalent statistical model, when only the dominant control sequence is taken into account. The conceptual difference between the proposed model and the HMM is as follows: in the HMM approach, the observable data in each state is modeled as though it were produced by a memoryless source, and a parametric description of this source is obtained during training; in the proposed model, the observations in each state are produced by a nonlinear dynamical system driven by noise, and both the parametric form of the dynamics and the noise are estimated. The performance of the model is illustrated for the tasks of nonlinear time-varying system modeling and continuously spoken digit recognition. The reported results show the potential of this model for providing high-performance speech recognition capability.
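
The search for the control sequence can be sketched as a greedy per-frame version (names chosen here; it stands in for the full dominant-sequence search, which the training algorithm alternates with re-estimation of the network parameters):

    import numpy as np

    def dominant_controls(net, xs, ys, controls):
        # For each (input, output) frame, pick the hidden control value
        # under which the network's prediction error is smallest.
        seq = []
        for x, y in zip(xs, ys):
            errs = [np.sum((net(x, c) - y) ** 2) for c in controls]
            seq.append(controls[int(np.argmin(errs))])
        return seq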


Associative Memory in a Network of `Biological' Neurons

Neural Information Processing Systems

The Hopfield network (Hopfield, 1982, 1984) provides a simple model of an associative memory in a neuronal structure. This model, however, is based on highly artificial assumptions, especially the use of formal two-state neurons (Hopfield, 1982) or graded-response neurons (Hopfield, 1984).
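
For reference, the two-state model in question fits in a few lines (a standard textbook sketch of the Hopfield (1982) storage and recall rules):

    import numpy as np

    def store(patterns):
        # Hebbian outer-product rule; patterns are +/-1 vectors.
        P = np.asarray(patterns, dtype=float)
        W = P.T @ P / P.shape[1]
        np.fill_diagonal(W, 0.0)     # no self-connections
        return W

    def recall(W, x, sweeps=20):
        # Asynchronous two-state updates from a noisy cue.
        x = x.astype(float).copy()
        for _ in range(sweeps):
            for i in np.random.permutation(len(x)):
                x[i] = 1.0 if W[i] @ x >= 0 else -1.0
        return x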


Spherical Units as Dynamic Consequential Regions: Implications for Attention, Competition and Categorization

Neural Information Processing Systems

Spherical units can be used to construct dynamically reconfigurable consequential regions, the geometric bases for Shepard's (1987) theory of stimulus generalization in animals and humans. We derive from Shepard's (1987) generalization theory a particular multi-layer network with dynamic (centers and radii) spherical regions that possesses a specific mass function (Cauchy). This learning model generalizes the configural-cue network model (Gluck & Bower, 1988): (1) configural cues can be learned and do not require pre-wiring the power set of cues; (2) consequential regions are continuous rather than discrete; and (3) competition among receptive fields is shown to be increased by the global extent of a particular mass function (Cauchy). We compare other common mass functions (Gaussian, as used in the models of Moody & Darken, 1989, and Kruschke, 1990) and standard backpropagation networks with hyperplane/logistic hidden units, showing that neither fares as well as a model of human generalization and learning.
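
The contrast between the two mass functions can be sketched with one common parameterization (chosen here for illustration): the Cauchy response falls off only polynomially with distance from the unit's center, so even remote units retain some activation and can compete, whereas the Gaussian response vanishes exponentially.

    import numpy as np

    def cauchy_unit(x, center, radius):
        # Spherical receptive field with heavy tails: global extent.
        d2 = np.sum((x - center) ** 2)
        return 1.0 / (1.0 + d2 / radius**2)

    def gaussian_unit(x, center, radius):
        # Same geometry, but activation decays exponentially with
        # squared distance, giving only local extent.
        d2 = np.sum((x - center) ** 2)
        return np.exp(-d2 / radius**2)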