
Temporal Representations in a Connectionist Speech System

Neural Information Processing Systems

Erich J. Smythe, 207 Greenmanville Ave, #6, Mystic, CT 06355 ABSTRACT SYREN is a connectionist model that uses temporal information in a speech signal for syllable recognition. It classifies the rates and directions of formant center transitions, and uses an adaptive method to associate transition events with each syllable. The system uses explicit spatio-temporal representations through delay lines. SYREN uses implicit parametric temporal representations in formant transition classification through node activation onset, decay, and transition delays in sub-networks analogous to visual motion detector cells. SYREN recognizes 79% of six repetitions of 24 consonant-vowel syllables when tested on unseen data, and recognizes 100% of its training syllables. INTRODUCTION Living organisms exist in a dynamic environment. Problem-solving systems, both natural and synthetic, must relate and interpret events that occur over time. Although connectionist models are based on metaphors from the brain, few have been designed to capture the temporal and sequential information common to even the most primitive nervous systems.
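The delay-line idea the abstract mentions can be pictured as a tapped shift register: each tap holds the signal's value at a fixed lag, so recent history is laid out spatially across the taps. A minimal sketch (the `DelayLine` class and the tap count are illustrative, not from the paper):

```python
from collections import deque

class DelayLine:
    """Fixed-length tapped delay line: holds the current input plus its
    most recent past values, giving time an explicit spatial layout."""
    def __init__(self, n_taps):
        self.taps = deque([0.0] * n_taps, maxlen=n_taps)

    def step(self, x):
        self.taps.appendleft(x)
        return list(self.taps)  # taps[0] = now, taps[k] = k steps ago

line = DelayLine(3)
line.step(1.0)
line.step(2.0)
print(line.step(3.0))  # [3.0, 2.0, 1.0]
```

A downstream network reading all taps at once sees a short window of the signal's history as a single static pattern.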



Fixed Point Analysis for Recurrent Networks

Neural Information Processing Systems

This paper provides a systematic analysis of the recurrent backpropagation (RBP) algorithm, introducing a number of new results. The main limitation of the RBP algorithm is that it assumes the convergence of the network to a stable fixed point in order to backpropagate the error signals. We show by experiment and eigenvalue analysis that this condition can be violated and that chaotic behavior can be avoided. Next we examine the advantages of RBP over the standard backpropagation algorithm. RBP is shown to build stable fixed points corresponding to the input patterns. This makes it an appropriate tool for content-addressable memories, one-to-many function learning, and inverse problems.
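The stability condition can be illustrated with a toy relaxation: iterate hypothetical dynamics x ← tanh(Wx + b) to convergence, then check that the spectral radius of the Jacobian at the fixed point is below one. The weights below are invented small values chosen so the map is a contraction; they are not from the paper.

```python
import numpy as np

# Toy recurrent dynamics x <- tanh(W x + b); weights are illustrative
# and deliberately small so the iteration contracts to a fixed point.
W = np.array([[0.20, -0.10],
              [0.05,  0.30]])
b = np.array([0.50, -0.20])

x = np.zeros(2)
for _ in range(100):           # relax the network to its fixed point
    x = np.tanh(W @ x + b)

# RBP's assumption: the fixed point is stable, i.e. the Jacobian
# J = diag(1 - tanh^2(W x + b)) @ W has spectral radius < 1 there.
J = (1.0 - np.tanh(W @ x + b) ** 2)[:, None] * W
rho = max(abs(np.linalg.eigvals(J)))
```

When `rho` exceeds one, the relaxation need not settle, and the premise behind backpropagating errors through the fixed point breaks down.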


The Boltzmann Perceptron Network: A Multi-Layered Feed-Forward Network Equivalent to the Boltzmann Machine

Neural Information Processing Systems

The concept of the stochastic Boltzmann machine (BM) is attractive for decision making and pattern classification purposes since the probability of attaining the network states is a function of the network energy. Hence, the probability of attaining particular energy minima may be associated with the probabilities of making certain decisions (or classifications). However, because of its stochastic nature, the complexity of the BM is fairly high and therefore such networks are not very likely to be used in practice. In this paper we suggest a way to alleviate this drawback by converting the stochastic BM into a deterministic network which we call the Boltzmann Perceptron Network (BPN). The BPN is functionally equivalent to the BM but has a feed-forward structure and low complexity.
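The energy-to-probability relation the abstract relies on is the Boltzmann distribution, P(s) ∝ exp(−E(s)/T): lower-energy states are exponentially more probable. A minimal sketch (the function name and energy values are illustrative):

```python
import numpy as np

def boltzmann_probs(energies, T=1.0):
    """Boltzmann distribution: P(s) proportional to exp(-E(s)/T)."""
    w = np.exp(-np.asarray(energies, dtype=float) / T)
    return w / w.sum()

# Three hypothetical network states with increasing energy.
p = boltzmann_probs([0.0, 1.0, 2.0], T=1.0)
```

This is why energy minima can be read as decisions: the deepest minimum carries the largest probability mass.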



Neural Network Recognizer for Hand-Written Zip Code Digits

Neural Information Processing Systems

This paper describes the construction of a system that recognizes hand-printed digits, using a combination of classical techniques and neural-net methods. The system has been trained and tested on real-world data, derived from zip codes seen on actual U.S. Mail. The system rejects a small percentage of the examples as unclassifiable, and achieves a very low error rate on the remaining examples. The system compares favorably with other state-of-the art recognizers. While some of the methods are specific to this task, it is hoped that many of the techniques will be applicable to a wide range of recognition tasks.


Implications of Recursive Distributed Representations

Neural Information Processing Systems

I will describe my recent results on the automatic development of fixed-width recursive distributed representations of variable-sized hierarchical data structures. One implication of this work is that certain types of AI-style data structures can now be represented in fixed-width analog vectors. Simple inferences can be performed using the type of pattern associations that neural networks excel at. Another implication arises from noting that these representations become self-similar in the limit. Once this door to chaos is opened.


Theory of Self-Organization of Cortical Maps

Neural Information Processing Systems

We have mathematically shown that cortical maps in the primary sensory cortices can be reproduced by using three hypotheses which have a physiological basis and meaning. Here, our main focus is on ocular dominance.


Associative Learning via Inhibitory Search

Neural Information Processing Systems

ALVIS is a reinforcement-based connectionist architecture that learns associative maps in continuous multidimensional environments. The discovered locations of positive and negative reinforcements are recorded in "do be" and "don't be" subnetworks, respectively. The outputs of the subnetworks relevant to the current goal are combined and compared with the current location to produce an error vector. This vector is backpropagated through a motor-perceptual mapping network.
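The comparison step can be sketched in its simplest form: subtract the current location from the goal-relevant subnetwork output to get the error vector. The 2-D locations below are invented for illustration, and only a single "do be" site is used, whereas the paper combines both subnetworks' outputs.

```python
import numpy as np

# Hypothetical 2-D continuous environment.
location = np.array([0.2, 0.5])    # agent's current location
do_be    = np.array([0.8, 0.9])    # recorded positive-reinforcement site

# Error vector: goal-relevant output minus current location, to be
# backpropagated through the motor-perceptual mapping network.
error = do_be - location
```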