Analog Neural Networks as Decoders
Erlanson, Ruth, Abu-Mostafa, Yaser
In turn, KWTA networks can be used as decoders of a class of nonlinear error-correcting codes. By interconnecting such KWTA networks, we can construct decoders capable of decoding more powerful codes. We consider several families of interconnected KWTA networks, analyze their performance in terms of coding theory metrics, and consider the feasibility of embedding such networks in VLSI technologies.
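As a rough illustration of the decoding idea, the sketch below applies a k-winners-take-all operation to a noisy analog vector, recovering a constant-weight codeword with exactly k ones. The function, the toy input, and the choice of k are assumptions made for illustration, not the authors' network or code family.

```python
import numpy as np

def kwta(activations, k):
    """k-winners-take-all: keep the k largest activations, zero the rest."""
    winners = np.argsort(activations)[-k:]   # indices of the k largest inputs
    output = np.zeros_like(activations)
    output[winners] = 1.0                    # winners saturate, losers are suppressed
    return output

# Toy use as a decoder: a received (noisy) analog vector is mapped to the
# nearest codeword of a constant-weight code with exactly k ones.
received = np.array([0.9, 0.1, 0.7, -0.2, 0.8, 0.05])
decoded = kwta(received, k=3)
print(decoded)   # -> [1. 0. 1. 0. 1. 0.]
```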
Shaping the State Space Landscape in Recurrent Networks
Simard, Patrice, Raysz, Jean Pierre, Victorri, Bernard
Fully recurrent (asymmetrical) networks can be thought of as dynamic systems. The dynamics can be shaped to implement content-addressable memories, recognize sequences, or generate trajectories. Unfortunately, several problems can arise: first, convergence in the state space is not guaranteed; second, the learned fixed points or trajectories are not necessarily stable; finally, there may exist spurious fixed points and/or spurious "attracting" trajectories that do not correspond to any patterns.
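A minimal sketch of the setting: iterating an asymmetric recurrent network and checking whether its trajectory settles into a fixed point. The tanh dynamics, the random weight matrix, and the convergence test are illustrative assumptions, not the authors' training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))   # asymmetric recurrent weights

def step(x):
    return np.tanh(W @ x)                              # one update of the dynamics

x = rng.normal(size=n)
for t in range(500):
    x_next = step(x)
    if np.linalg.norm(x_next - x) < 1e-8:              # settled to a fixed point
        print(f"fixed point reached at t={t}")
        break
    x = x_next
else:
    print("no convergence within 500 steps (possible limit cycle or slow dynamics)")
```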
Evaluation of Adaptive Mixtures of Competing Experts
Nowlan, Steven J., Hinton, Geoffrey E.
We compare the performance of the modular architecture, composed of competing expert networks, suggested by Jacobs, Jordan, Nowlan and Hinton (1991) to the performance of a single back-propagation network on a complex, but low-dimensional, vowel recognition task. Simulations reveal that this system is capable of uncovering interesting decompositions in a complex task. The type of decomposition is strongly influenced by the nature of the input to the gating network that decides which expert to use for each case. The modular architecture also exhibits consistently better generalization on many variations of the task. 1 Introduction If back-propagation is used to train a single, multilayer network to perform different subtasks on different occasions, there will generally be strong interference effects which lead to slow learning and poor generalization. If we know in advance that a set of training cases may be naturally divided into subsets that correspond to distinct subtasks, interference can be reduced by using a system (see Figure 1) composed of several different "expert" networks plus a gating network that decides which of the experts should be used for each training case. Systems of this type have been suggested by a number of authors (Hampshire and Waibel, 1989; Jacobs, Jordan and Barto, 1990; Jacobs et al., 1991) (see also the paper by Jacobs and Jordan in this volume (1991)).
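To make the architecture concrete, the sketch below shows a forward pass through a gated mixture of experts: a softmax gating network produces mixing proportions over a few experts, and their outputs are combined accordingly. The linear experts, the softmax gate, and all dimensions are simplifying assumptions for illustration, not the configuration used by the authors.

```python
import numpy as np

rng = np.random.default_rng(1)
n_experts, d_in, d_out = 3, 4, 2

W_experts = rng.normal(size=(n_experts, d_out, d_in))  # one linear expert per subtask
W_gate = rng.normal(size=(n_experts, d_in))            # gating network weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def forward(x):
    gate = softmax(W_gate @ x)                          # mixing proportions, one per expert
    expert_outputs = np.einsum('eoi,i->eo', W_experts, x)
    return gate @ expert_outputs                        # gated combination of expert outputs

x = rng.normal(size=d_in)
print(forward(x))
```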
Exploratory Feature Extraction in Speech Signals
A novel unsupervised neural network for dimensionality reduction which seeks directions emphasizing multimodality is presented, and its connection to exploratory projection pursuit methods is discussed. This leads to a new statistical insight into the synaptic modification equations governing learning in Bienenstock, Cooper, and Munro (BCM) neurons (1982). The importance of a dimensionality reduction principle based solely on distinguishing features is demonstrated using a linguistically motivated phoneme recognition experiment, and compared with feature extraction using a back-propagation network. 1 Introduction Due to the curse of dimensionality (Bellman, 1961) it is desirable to extract features from a high dimensional data space before attempting a classification. How best to perform this feature extraction/dimensionality reduction is far less clear. A first simplification is to consider only features defined by linear (or semi-linear) projections of high dimensional data.
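For reference, the sketch below implements a BCM-style synaptic modification rule in its commonly cited form, with a sliding threshold that tracks the average squared postsynaptic activity. The constants, the running-average scheme, and the random input stream are assumptions for illustration, not the specific equations analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d_in = 10
m = rng.normal(scale=0.1, size=d_in)   # synaptic weights of one BCM neuron
theta = 0.0                            # sliding modification threshold
eta, tau = 0.01, 0.99                  # learning rate and threshold averaging factor

for _ in range(1000):
    d = rng.normal(size=d_in)                    # presynaptic input pattern
    c = m @ d                                    # postsynaptic activity
    theta = tau * theta + (1 - tau) * c ** 2     # threshold tracks the mean of c^2
    m += eta * c * (c - theta) * d               # BCM update: phi(c, theta) * d

print(m)
```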
Direct memory access using two cues: Finding the intersection of sets in a connectionist model
Wiles, Janet, Humphreys, Michael S., Bain, John D., Dennis, Simon
For lack of alternative models, search and decision processes have provided the dominant paradigm for human memory access using two or more cues, despite evidence against search as an access process (Humphreys, Wiles & Bain, 1990). We present an alternative process to search, based on calculating the intersection of sets of targets activated by two or more cues. Two methods of computing the intersection are presented, one using information about the possible targets, the other constraining the cue-target strengths in the memory matrix. Analysis using orthogonal vectors to represent the cues and targets demonstrates the competence of both processes, and simulations using sparse distributed representations demonstrate the performance of the latter process for tasks involving 2 and 3 cues.
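A minimal sketch of the intersection idea: each cue retrieves a graded set of candidate targets from a memory matrix, and the intersection is approximated by combining the two activation vectors elementwise. The binary matrix, the elementwise product, and the threshold are illustrative assumptions rather than either of the two methods presented in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cues, n_targets = 6, 8

# Memory matrix: entry [i, j] is the strength between cue i and target j.
M = (rng.random((n_cues, n_targets)) < 0.4).astype(float)

def targets_for(cue_index):
    return M[cue_index]                      # graded set of targets activated by one cue

def intersect(cue_a, cue_b):
    combined = targets_for(cue_a) * targets_for(cue_b)   # high only where both cues agree
    return np.flatnonzero(combined > 0.5)

print(intersect(0, 1))   # indices of targets consistent with both cues
```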
Phonetic Classification and Recognition Using the Multi-Layer Perceptron
Leung, Hong C., Glass, James R., Phillips, Michael S., Zue, Victor W.
In this paper, we will describe several extensions to our earlier work, utilizing a segment-based approach. We will formulate our segmental framework and report our study on the use of multi-layer perceptrons for detection and classification of phonemes. We will also examine the outputs of the network, and compare the network performance with other classifiers. Our investigation is performed within a set of experiments that attempts to recognize 38 vowels and consonants in American English independent of speaker.
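A minimal sketch of a multi-layer perceptron phoneme classifier of the kind described, written in NumPy; the feature dimensionality, hidden layer size, and 38-class softmax output are placeholders, not the authors' configuration or training setup.

```python
import numpy as np

rng = np.random.default_rng(4)
d_in, d_hidden, n_classes = 40, 64, 38   # e.g. segmental acoustic features -> 38 phone classes

W1 = rng.normal(scale=0.1, size=(d_hidden, d_in))
W2 = rng.normal(scale=0.1, size=(n_classes, d_hidden))

def classify(features):
    h = np.tanh(W1 @ features)                 # hidden layer
    scores = W2 @ h                            # one score per phone class
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                       # softmax over the 38 classes
    return probs.argmax(), probs

x = rng.normal(size=d_in)
label, probs = classify(x)
print(label, probs[label])
```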
The Devil and the Network: What Sparsity Implies to Robustness and Memory
Biswas, Sanjay, Venkatesh, Santosh S.
Robustness is a commonly bruited property of neural networks; in particular, a folk theorem in neural computation asserts that neural networks, in contexts with large interconnectivity, continue to function efficiently, albeit with some degradation, in the presence of component damage or loss. A second folk theorem in such contexts asserts that dense interconnectivity between neural elements is a sine qua non for the efficient usage of resources. These premises are formally examined in this communication in a setting that invokes the notion of the "devil"
Flight Control in the Dragonfly: A Neurobiological Simulation
Faller, William E., Luttges, Marvin W.
Neural network simulations of the dragonfly flight neurocontrol system have been developed to understand how this insect uses complex, unsteady aerodynamics. The simulation networks account for the ganglionic spatial distribution of cells as well as the physiologic operating range and the stochastic cellular firing history of each neuron. In addition the motor neuron firing patterns, "flight command sequences", were utilized. Simulation training was targeted against both the cellular and flight motor neuron firing patterns. The trained networks accurately resynthesized the intraganglionic cellular firing patterns. These in turn controlled the motor neuron firing patterns that drive wing musculature during flight. Such networks provide both neurobiological analysis tools and first-generation controls for the use of "unsteady" aerodynamics.
Using Genetic Algorithms to Improve Pattern Classification Performance
Chang, Eric I., Lippmann, Richard P.
Feature selection and creation are two of the most important and difficult tasks in the field of pattern classification. Good features improve the performance of both conventional and neural network pattern classifiers. Exemplar selection is another task that can reduce the memory and computation requirements of a KNN classifier. These three tasks require a search through a space which is typically so large that exhaustive search is impractical. The purpose of this research was to explore the usefulness of genetic search algorithms for these tasks.
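A minimal sketch of genetic search over feature subsets for a nearest-neighbour classifier, in the spirit of the task described; the fitness function, population size, toy data, and crossover/mutation operators are illustrative assumptions, not the experimental setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n_features, pop_size, n_generations = 10, 20, 30

# Toy data: the class label depends only on the first three features.
X = rng.normal(size=(200, n_features))
y = (X[:, :3].sum(axis=1) > 0).astype(int)

def fitness(mask):
    """Leave-one-out 1-NN accuracy using only the selected features."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    dists = np.linalg.norm(Xs[:, None] - Xs[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    return float((y[dists.argmin(axis=1)] == y).mean())

pop = rng.integers(0, 2, size=(pop_size, n_features))   # each row is a feature-subset mask
for _ in range(n_generations):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[scores.argsort()[-pop_size // 2:]]            # keep the fitter half
    cut = rng.integers(1, n_features)
    children = np.concatenate([parents[:, :cut],
                               parents[::-1, cut:]], axis=1)    # one-point crossover
    mutate = rng.random(children.shape) < 0.05
    children = np.where(mutate, 1 - children, children)         # bit-flip mutation
    pop = np.concatenate([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```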