Applications of Neural Networks in Video Signal Processing

Neural Information Processing Systems

Although color TV is an established technology, there are a number of longstanding problems for which neural networks may be suited. Impulse noise is such a problem, and a modular neural network approach is presented in this paper. The training and analysis were done on conventional computers, while real-time simulations were performed on a massively parallel computer called the Princeton Engine. The network approach was compared to a conventional alternative, a median filter. Real-time simulations and quantitative analysis demonstrated the technical superiority of the neural system. Ongoing work is investigating the complexity and cost of implementing this system in hardware.
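The conventional baseline named above, a median filter, replaces each pixel with the median of its neighborhood, which suppresses isolated impulse noise. A minimal sketch, where the window size and edge padding are illustrative assumptions:

```python
import numpy as np

def median_filter(frame, k=3):
    """Replace each pixel with the median of its k-by-k neighborhood.

    A sketch of the conventional impulse-noise filter used as the
    baseline above; the window size k and edge padding are assumptions.
    """
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.empty_like(frame)
    for i in range(frame.shape[0]):
        for j in range(frame.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```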


Generalization Properties of Radial Basis Functions

Neural Information Processing Systems

Sherif M. Botros and Christopher G. Atkeson, Brain and Cognitive Sciences Department and the Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139

We examine the ability of radial basis functions (RBFs) to generalize. We compare the performance of several types of RBFs, using the inverse dynamics of an idealized two-joint arm as a test case. We find that without a proper choice of a norm for the inputs, RBFs have poor generalization properties. A simple global scaling of the input variables greatly improves performance.
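The abstract's key point is that the norm used inside the basis functions matters. A minimal sketch of a Gaussian RBF predictor with a global per-dimension scaling of the inputs (all parameter names and the Gaussian form are illustrative assumptions, not the paper's exact model):

```python
import numpy as np

def rbf_predict(x, centers, weights, scales, width=1.0):
    """Gaussian RBF prediction with a globally scaled input norm.

    Distances are computed after rescaling each input dimension,
    reflecting the abstract's finding that an unscaled Euclidean norm
    gives poor generalization. Shapes: x (d,), centers (n, d),
    scales (d,), weights (n,).
    """
    diff = (x - centers) / scales          # global per-dimension scaling
    d2 = np.sum(diff**2, axis=1)           # squared scaled distances
    phi = np.exp(-d2 / (2.0 * width**2))   # Gaussian basis activations
    return phi @ weights
```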


ε-Entropy and the Complexity of Feedforward Neural Networks

Neural Information Processing Systems

We are concerned with the problem of the number of nodes needed in a feedforward neural network in order to represent a function to within a specified accuracy.


Direct memory access using two cues: Finding the intersection of sets in a connectionist model

Neural Information Processing Systems

For lack of alternative models, search and decision processes have provided the dominant paradigm for human memory access using two or more cues, despite evidence against search as an access process (Humphreys, Wiles & Bain, 1990). We present an alternative process to search, based on calculating the intersection of sets of targets activated by two or more cues. Two methods of computing the intersection are presented, one using information about the possible targets, the other constraining the cue-target strengths in the memory matrix. Analysis using orthogonal vectors to represent the cues and targets demonstrates the competence of both processes, and simulations using sparse distributed representations demonstrate the performance of the latter process for tasks involving 2 and 3 cues.
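One way to picture the intersection idea: each cue retrieves a graded activation over targets through a cue-target memory matrix, and combining the two activations keeps only targets supported by both cues. A sketch, where the elementwise-product combination is an assumption standing in for the paper's two specific methods:

```python
import numpy as np

def intersect_targets(memory, cue_a, cue_b):
    """Activate target sets from two cues and intersect them.

    memory is a (targets x cue_dim) cue-target strength matrix; each
    cue retrieves a graded activation over targets, and the
    elementwise product is nonzero only where the two activated sets
    overlap. The product rule is illustrative, not the paper's exact
    mechanism.
    """
    act_a = memory @ cue_a    # targets activated by the first cue
    act_b = memory @ cue_b    # targets activated by the second cue
    return act_a * act_b      # the intersection of the two sets
```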


Shaping the State Space Landscape in Recurrent Networks

Neural Information Processing Systems

Fully recurrent (asymmetrical) networks can be thought of as dynamic systems. The dynamics can be shaped to implement content-addressable memories, recognize sequences, or generate trajectories. Unfortunately, several problems can arise. First, convergence in the state space is not guaranteed. Second, the learned fixed points or trajectories are not necessarily stable. Finally, there may exist spurious fixed points and/or spurious "attracting" trajectories that do not correspond to any patterns.
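The dynamic-systems view, and the first problem above, can be seen in a few lines: iterating the state update of an asymmetric recurrent net may or may not settle. A sketch under the assumption of a tanh update rule:

```python
import numpy as np

def iterate_to_fixed_point(W, x0, steps=200, tol=1e-6):
    """Iterate x <- tanh(W x) and report whether the state settles.

    A sketch of treating a fully recurrent net as a dynamic system;
    with an asymmetric weight matrix W, convergence to a fixed point
    is not guaranteed, which is the first problem listed above.
    """
    x = x0
    for _ in range(steps):
        x_next = np.tanh(W @ x)
        if np.linalg.norm(x_next - x) < tol:
            return x_next, True    # settled at a fixed point
        x = x_next
    return x, False                # did not converge within budget
```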


A competitive modular connectionist architecture

Neural Information Processing Systems

We describe a multi-network, or modular, connectionist architecture that captures the fact that many tasks have structure at a level of granularity intermediate to that assumed by local and global function approximation schemes. The main innovation of the architecture is that it combines associative and competitive learning in order to learn task decompositions. A task decomposition is discovered by forcing the networks comprising the architecture to compete to learn the training patterns. As a result of the competition, different networks learn different training patterns and, thus, learn to partition the input space. The performance of the architecture on a "what" and "where" vision task and on a multi-payload robotics task is presented.
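A sketch of the forward pass of such a competitive modular architecture, assuming linear expert networks and a softmax gate (simplifying assumptions, not the paper's exact design): each expert proposes an output, and the gate's competition determines which expert claims the pattern, so that during learning different experts come to own different regions of the input space.

```python
import numpy as np

def modular_forward(experts, gate_weights, x):
    """Combine competing expert networks through a softmax gate.

    experts is a list of (out_dim x in_dim) weight matrices, one per
    network; gate_weights is (n_experts x in_dim). Returns the blended
    output and the gating proportions that mediate the competition.
    """
    outputs = np.stack([We @ x for We in experts])   # one proposal per expert
    scores = gate_weights @ x                        # gate's raw scores
    g = np.exp(scores - scores.max())
    g /= g.sum()                                     # softmax competition
    return (g[:, None] * outputs).sum(axis=0), g
```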


Simulation of the Neocognitron on a CCD Parallel Processing Architecture

Neural Information Processing Systems

The neocognitron is a neural network for pattern recognition and feature extraction. An analog CCD parallel processing architecture developed at Lincoln Laboratory is particularly well suited to the computational requirements of shared-weight networks such as the neocognitron, and implementation of the neocognitron using the CCD architecture was simulated. A modification to the neocognitron training procedure, which improves network performance under the limited arithmetic precision that would be imposed by the CCD architecture, is presented.
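The central constraint above is that a CCD implementation imposes limited arithmetic precision, so trained weights must survive rounding to a small number of levels. A sketch of that constraint (the bit width and weight range are illustrative assumptions, and this is not the paper's specific training modification):

```python
import numpy as np

def quantize_weights(w, bits=8, w_max=1.0):
    """Round weights to the grid imposed by limited analog precision.

    Weights are snapped to 2**bits - 1 evenly spaced levels in
    [-w_max, w_max], mimicking the precision an analog CCD array
    would impose on a shared-weight network.
    """
    levels = 2**bits - 1
    step = 2.0 * w_max / levels                  # spacing between levels
    return np.clip(np.round(w / step) * step, -w_max, w_max)
```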


A Short-Term Memory Architecture for the Learning of Morphophonemic Rules

Neural Information Processing Systems

In the debate over the power of connectionist models to handle linguistic phenomena, considerable attention has been focused on the learning of simple morphological rules. It is a straightforward matter in a symbolic system to specify how the meanings of a stem and a bound morpheme combine to yield the meaning of a whole word and how the form of the bound morpheme depends on the shape of the stem. In a distributed connectionist system, however, where there may be no explicit morphemes, words, or rules, things are not so simple. The most important work in this area has been that of Rumelhart and McClelland (1986), together with later extensions by Marchman and Plunkett (1989). The networks involved were trained to associate English verb stems with the corresponding past-tense forms, successfully generating both regular and irregular forms and generalizing to novel inputs.


Analog Neural Networks as Decoders

Neural Information Processing Systems

Analog k-winner-take-all (KWTA) networks can be used as decoders of a class of nonlinear error-correcting codes. By interconnecting such KWTA networks, we can construct decoders capable of decoding more powerful codes. We consider several families of interconnected KWTA networks, analyze their performance in terms of coding-theory metrics, and consider the feasibility of embedding such networks in VLSI technologies.
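The KWTA primitive itself is simple to state: keep the k largest activations and suppress the rest. A minimal sketch (the binary winner outputs are an assumption; an analog network would realize this through feedback dynamics rather than a sort):

```python
import numpy as np

def kwta(x, k):
    """k-winner-take-all: keep the k largest activations, zero the rest.

    A sketch of the KWTA building block named in the abstract; a
    decoder built from interconnected KWTA blocks would map a noisy
    received vector toward a codeword of a suitable nonlinear code.
    """
    out = np.zeros_like(x, dtype=float)
    winners = np.argsort(x)[-k:]   # indices of the k largest entries
    out[winners] = 1.0             # binary winner outputs (an assumption)
    return out
```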


Compact EEPROM-based Weight Functions

Neural Information Processing Systems

The recent surge of interest in neural networks and parallel analog computation has motivated the need for compact analog computing blocks. Analog weighting is an important computational function of this class: it combines two analog values, one of which is typically varying (the input) and one of which is typically fixed (the weight), or at least varying more slowly. The varying value is "weighted" by the fixed value through a "weighting function", typically multiplication. Analog weighting is most interesting when the overall computational task involves computing the weighted sum of the inputs.
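In digital terms, the computation that an array of such analog weighting blocks serves is just a multiply-accumulate. A trivial sketch for reference:

```python
def weighted_sum(inputs, weights):
    """Weighted sum of inputs, the computation analog weighting serves.

    Each varying input is multiplied by its fixed (or slowly varying)
    weight and the products are accumulated, as a multiplier array
    feeding a summing node would do in the analog implementation.
    """
    return sum(x * w for x, w in zip(inputs, weights))
```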