Discovering Discrete Distributed Representations with Iterative Competitive Learning
Competitive learning is an unsupervised algorithm that classifies input patterns into mutually exclusive clusters. In a neural net framework, each cluster is represented by a processing unit that competes with others in a winner-take-all pool for an input pattern. I present a simple extension to the algorithm that allows it to construct discrete, distributed representations. Discrete representations are useful because they are relatively easy to analyze and their information content can readily be measured. Distributed representations are useful because they explicitly encode similarity. The basic idea is to apply competitive learning iteratively to an input pattern, and after each stage to subtract from the input pattern the component that was captured in the representation at that stage. This component is simply the weight vector of the winning unit of the competitive pool. The subtraction procedure forces competitive pools at different stages to encode different aspects of the input. The algorithm is essentially the same as a traditional data compression technique known as multistep vector quantization, although the neural net perspective suggests potentially powerful extensions to that approach.
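The stage-wise subtract-and-recode loop is simple enough to sketch. The following is a minimal illustration of the idea, not the paper's implementation: the pool sizes, learning rate, epoch count, and function names are all assumptions made here for clarity.

```python
import numpy as np

def train_iterative_cl(patterns, n_stages=3, units_per_pool=8,
                       lr=0.05, epochs=50, seed=0):
    """Toy sketch of iterative competitive learning (multistep VQ).

    Each stage is a competitive pool; the winning unit's weight vector
    is subtracted from the input before the next stage sees it.
    All parameter values here are illustrative, not from the paper.
    """
    rng = np.random.default_rng(seed)
    dim = patterns.shape[1]
    pools = [rng.normal(scale=0.1, size=(units_per_pool, dim))
             for _ in range(n_stages)]
    for _ in range(epochs):
        for x in patterns:
            residual = x.copy()
            for W in pools:
                # winner = unit closest to the current residual
                winner = np.argmin(np.linalg.norm(W - residual, axis=1))
                # standard competitive-learning update for the winner
                W[winner] += lr * (residual - W[winner])
                # subtract the captured component before the next stage
                residual = residual - W[winner]
    return pools

def encode(x, pools):
    """Discrete distributed code: one winner index per stage."""
    code, residual = [], x.copy()
    for W in pools:
        winner = np.argmin(np.linalg.norm(W - residual, axis=1))
        code.append(int(winner))
        residual = residual - W[winner]
    return code
```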
Connectionist Music Composition Based on Melodic and Stylistic Constraints
Mozer, Michael C., Soukup, Todd
We describe a recurrent connectionist network, called CONCERT, that uses a set of melodies written in a given style to compose new melodies in that style. CONCERT is an extension of a traditional algorithmic composition technique in which transition tables specify the probability of the next note as a function of previous context. A central ingredient of CONCERT is the use of a psychologically grounded representation of pitch.
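Since CONCERT is described as extending the transition-table technique, a sketch of that baseline clarifies what is being generalized. The code below builds a second-order note transition table and samples a new melody from it; the training melody, function names, and order are invented for illustration, and this shows the baseline technique rather than CONCERT's recurrent network.

```python
import random
from collections import defaultdict

def build_transition_table(melodies, order=2):
    """Transition table: maps the previous `order` notes to the list of
    continuations observed in the training melodies."""
    table = defaultdict(list)
    for melody in melodies:
        for i in range(len(melody) - order):
            context = tuple(melody[i:i + order])
            table[context].append(melody[i + order])
    return table

def compose(table, seed_context, length=16):
    """Sample a new melody note by note from the table."""
    melody = list(seed_context)
    for _ in range(length):
        context = tuple(melody[-len(seed_context):])
        choices = table.get(context)
        if not choices:          # unseen context: stop early
            break
        melody.append(random.choice(choices))
    return melody

# Made-up training data, purely to exercise the sketch.
melodies = [["C4", "E4", "G4", "E4", "C4", "D4", "E4", "C4"]]
table = build_transition_table(melodies, order=2)
print(compose(table, ("C4", "E4"), length=8))
```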
Continuous Speech Recognition by Linked Predictive Neural Networks
Tebelskis, Joe, Waibel, Alex, Petek, Bojan, Schmidbauer, Otto
We present a large vocabulary, continuous speech recognition system based on Linked Predictive Neural Networks (LPNN's). The system uses neural networks as predictors of speech frames, yielding distortion measures which are used by the One Stage DTW algorithm to perform continuous speech recognition. The system, already deployed in a Speech-to-Speech Translation system, currently achieves 95%, 58%, and 39% word accuracy on tasks with perplexity 5, 111, and 402, respectively, outperforming several simple HMMs that we tested. We also found that the accuracy and speed of the LPNN can be slightly improved by the judicious use of hidden control inputs. We conclude by discussing the strengths and weaknesses of the predictive approach.
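A rough picture of the predictive setup: each model predicts the next speech frame, and its squared prediction error serves as the local distortion that the One Stage DTW pass then aligns against word models. The sketch below is a simplifying assumption throughout; the linear "predictors" and array shapes stand in for the paper's networks.

```python
import numpy as np

def frame_distortions(frames, predictors):
    """Local distortion matrix for predictive recognition.

    `frames` is a (T, d) array of speech frames; `predictors` is a list
    of per-state predictor functions mapping the previous frame to a
    predicted next frame. Entry [t, s] is the squared prediction error
    of state s at frame t, which a DTW pass would then align.
    """
    T = len(frames)
    D = np.zeros((T - 1, len(predictors)))
    for t in range(1, T):
        for s, predict in enumerate(predictors):
            D[t - 1, s] = np.sum((frames[t] - predict(frames[t - 1])) ** 2)
    return D

# Illustrative linear "predictors"; the real system uses neural networks.
rng = np.random.default_rng(0)
frames = rng.normal(size=(20, 16))
predictors = [lambda x, A=rng.normal(scale=0.1, size=(16, 16)): A @ x
              for _ in range(3)]
print(frame_distortions(frames, predictors).shape)  # (19, 3)
```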
Development and Spatial Structure of Cortical Feature Maps: A Model Study
Obermayer, Klaus, Ritter, Helge, Schulten, Klaus
Feature selective cells in the primary visual cortex of several species are organized in hierarchical topographic maps of stimulus features like "position in visual space", "orientation" and "ocular dominance". In order to understand and describe their spatial structure and their development, we investigate a self-organizing neural network model based on the feature map algorithm. The model explains map formation as a dimension-reducing mapping from a high-dimensional feature space onto a two-dimensional lattice, such that "similarity" between features (or feature combinations) is translated into "spatial proximity" between the corresponding feature selective cells. The model is able to reproduce several aspects of the spatial structure of cortical maps in the visual cortex.
1 Introduction
Cortical maps are functionally defined structures of the cortex, which are characterized by an ordered spatial distribution of functionally specialized cells along the cortical surface. In the primary visual area(s) the response properties of these cells must be described by several independent features, and there is a strong tendency to map combinations of these features onto the cortical surface in a way that translates "similarity" into "spatial proximity" of the corresponding feature selective cells (see e.g.
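The dimension-reducing mapping can be illustrated with a plain self-organizing (Kohonen-style) feature map, the algorithm the model is based on: nearby lattice units come to respond to similar feature combinations. The grid size and the learning-rate and neighborhood schedules below are illustrative choices, not the paper's.

```python
import numpy as np

def train_som(features, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing feature map: maps a high-dimensional
    feature space onto a 2-D lattice so that feature similarity becomes
    lattice proximity."""
    rng = np.random.default_rng(seed)
    H, Wd = grid
    dim = features.shape[1]
    weights = rng.normal(scale=0.1, size=(H, Wd, dim))
    coords = np.stack(np.meshgrid(np.arange(H), np.arange(Wd),
                                  indexing="ij"), axis=-1)
    n_steps = epochs * len(features)
    step = 0
    for _ in range(epochs):
        for x in features:
            frac = step / n_steps
            lr = lr0 * (1 - frac)               # decaying learning rate
            sigma = sigma0 * (1 - frac) + 0.5   # shrinking neighborhood
            # best-matching unit on the lattice
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood around the winner
            lattice_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-lattice_d2 / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights
```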
An Attractor Neural Network Model of Recall and Recognition
Ruppin, Eytan, Yeshurun, Yehezkel
This work presents an Attractor Neural Network (ANN) model of Recall and Recognition. It is shown that an ANN model can qualitatively account for a wide range of experimental psychological data pertaining to these two main aspects of memory access. Certain psychological phenomena are accounted for, including the effects of list length, word frequency, presentation time, context shift, and aging. Thereafter, the probabilities of successful Recall and Recognition are estimated, in order to possibly enable further quantitative examination of the model.
1 Motivation
The goal of this paper is to demonstrate that a Hopfield-based [Hop82] ANN model can qualitatively account for a wide range of experimental psychological data pertaining to the two main aspects of memory access, Recall and Recognition. Recall is defined as the ability to retrieve an item from a list of items (words) originally presented during a previous learning phase, given an appropriate cue (cued Recall), or spontaneously (free Recall). Recognition is defined as the ability to successfully acknowledge that a certain item has or has not appeared in the tutorial list learned before. The main prospect of ANN modeling is that some parameter values, that in former, 'classical' models of memory retrieval (see e.g.
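For readers unfamiliar with the substrate, a minimal Hopfield-style attractor network shows the recall dynamics such a model builds on: a noisy cue settles into the nearest stored attractor. The recognition test and its threshold below are illustrative stand-ins, not the paper's estimation procedure.

```python
import numpy as np

def hopfield_store(patterns):
    """Hebbian weight matrix for a Hopfield net; patterns are +/-1 rows."""
    n = patterns.shape[1]
    W = (patterns.T @ patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, cue, steps=10):
    """Iterate from a noisy cue toward the nearest attractor.
    (Synchronous updates for brevity; the model's dynamics differ.)"""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

def recognize(W, probe, threshold=0.9):
    """Toy recognition judgment: the probe counts as 'old' if it already
    sits near an attractor, measured by overlap with its recalled state.
    The threshold is an illustrative parameter, not from the paper."""
    overlap = np.mean(recall(W, probe) == probe)
    return overlap >= threshold
```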
ART2/BP architecture for adaptive estimation of dynamic processes
The goal has been to construct a supervised artificial neural network that learns incrementally an unknown mapping. As a result a network consisting of a combination of ART2 and backpropagation is proposed and is called an "ART2/BP" network. The ART2 network is used to build and focus a supervised backpropagation network. The ART2/BP network has the advantage of being able to dynamically expand itself in response to input patterns containing new information. Simulation results show that the ART2/BP network outperforms a classical maximum likelihood method for the estimation of a discrete dynamic and nonlinear transfer function.
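One way to picture the combination: an ART-style vigilance test routes each input to a matching cluster, novel inputs spawn a new cluster (so the network expands itself), and each cluster trains its own supervised model. Everything in the sketch below, from the cosine matching rule to the linear local models used in place of backpropagation nets, is a simplifying assumption rather than the paper's architecture.

```python
import numpy as np

class ART2BPSketch:
    """Loose sketch of the ART2/BP idea with stand-in components."""

    def __init__(self, dim, vigilance=0.9, lr=0.1):
        self.vigilance, self.lr, self.dim = vigilance, lr, dim
        self.prototypes, self.models = [], []

    def _match(self, x):
        # ART-style resonance test: accept the first prototype whose
        # similarity to the input clears the vigilance threshold
        for k, p in enumerate(self.prototypes):
            sim = x @ p / (np.linalg.norm(x) * np.linalg.norm(p) + 1e-9)
            if sim >= self.vigilance:
                return k
        # novel input: expand the network with a new cluster + local model
        self.prototypes.append(x.copy())
        self.models.append(np.zeros(self.dim))
        return len(self.prototypes) - 1

    def update(self, x, y):
        k = self._match(x)
        # gradient step on squared error for the cluster's local model
        err = y - self.models[k] @ x
        self.models[k] += self.lr * err * x
        return err

    def predict(self, x):
        return self.models[self._match(x)] @ x
```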
Exploiting Syllable Structure in a Connectionist Phonology Model
Touretzky, David S., Wheeler, Deirdre W.
In a previous paper (Touretzky & Wheeler, 1990a) we showed how adding a clustering operation to a connectionist phonology model produced a parallel processing account of certain "iterative" phenomena. In this paper we show how the addition of a second structuring primitive, syllabification, greatly increases the power of the model. We present examples from a non-Indo-European language that appear to require rule ordering to at least a depth of four. By adding syllabification circuitry to structure the model's perception of the input string, we are able to handle these examples with only two derivational steps. We conclude that in phonology, derivation can be largely replaced by structuring.
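Syllabification as a structuring primitive can be illustrated with a toy onset-maximizing syllabifier: consonants between two nuclei are assigned to the following syllable whenever they form a legal onset. The onset inventory and example word below are invented for illustration and make no claim about the language analyzed in the paper or about the model's circuitry.

```python
VOWELS = set("aeiou")

def syllabify(word, max_onset=("pr", "tr", "kr", "pl", "kl",
                               "st", "sp", "str")):
    """Toy onset-maximizing syllabifier over an orthographic string."""
    # find vowel (nucleus) positions
    nuclei = [i for i, c in enumerate(word) if c in VOWELS]
    if not nuclei:
        return [word]
    syllables, start = [], 0
    for a, b in zip(nuclei, nuclei[1:]):
        cluster = word[a + 1:b]          # consonants between two nuclei
        # give the next syllable the largest legal onset
        split = len(cluster)
        for k in range(len(cluster) + 1):
            tail = cluster[k:]
            if tail == "" or tail in max_onset or len(tail) == 1:
                split = k
                break
        boundary = a + 1 + split
        syllables.append(word[start:boundary])
        start = boundary
    syllables.append(word[start:])
    return syllables

print(syllabify("instru"))  # ['in', 'stru'] with this toy inventory
```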