
Neural Approach for TV Image Compression Using a Hopfield Type Network

Neural Information Processing Systems

ABSTRACT A self-organizing Hopfield network has been developed in the context of Vector Quantization, aiming at compression of television images. The metastable states of the spin-glass-like network are used as an extra storage resource, with the Minimal Overlap learning rule (Krauth and Mezard 1987) used to optimize the organization of the attractors. The self-organizing scheme that we have devised results in the generation of an adaptive codebook for any given TV image. Since in many applications the appropriate attractors are unknown, the aim of this work is to develop a network capable of learning how to select them. TV image compression using Vector Quantization (V.Q.) (Gray, 1984), a key issue for HDTV transmission, is a typical case, since the non-neural algorithms which generate the list of codes (the codebook) are suboptimal.
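
For orientation, the quantization step that any such codebook feeds is simple to state in code. The following Python fragment is a generic illustration of Vector Quantization itself (block size, array names, and the codebook are hypothetical; it is not the authors' network):

```python
import numpy as np

def vq_encode(image, codebook, block=4):
    """Encode an image by nearest-codeword lookup (illustrative only).

    image    -- 2D grayscale array whose sides are multiples of `block`
    codebook -- array of shape (n_codes, block*block)
    Returns one codeword index per image block; transmitting these
    indices instead of pixels is where the compression comes from.
    """
    h, w = image.shape
    indices = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = image[i:i+block, j:j+block].reshape(-1)
            # Nearest codeword in Euclidean distance.
            d = np.sum((codebook - patch) ** 2, axis=1)
            indices.append(int(np.argmin(d)))
    return np.array(indices).reshape(h // block, w // block)

def vq_decode(indices, codebook, block=4):
    """Rebuild an approximate image by pasting codewords back in place."""
    h, w = indices.shape
    out = np.zeros((h * block, w * block))
    for i in range(h):
        for j in range(w):
            out[i*block:(i+1)*block, j*block:(j+1)*block] = \
                codebook[indices[i, j]].reshape(block, block)
    return out
```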


An Analog VLSI Chip for Thin-Plate Surface Interpolation

Neural Information Processing Systems

Reconstructing a surface from sparse sensory data is a well-known problem in computer vision. This paper describes an experimental analog VLSI chip for smooth surface interpolation from sparse depth data. An eight-node 1D network was designed in 3 µm CMOS and successfully tested.
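
In software terms, the computation such a chip performs can be sketched as relaxation toward the minimum of a discrete thin-plate energy. The Python sketch below is a minimal illustration under that standard formulation (node count, step size, and the data-fidelity weight are assumptions, not circuit details from the paper):

```python
import numpy as np

def thin_plate_1d(n, data, weight=10.0, rate=0.02, steps=5000):
    """Relax an n-node 1D profile toward a thin-plate interpolant.

    data   -- dict mapping node index -> measured depth
    weight -- pull toward measurements (large = near-interpolation)
    Minimizes the sum of squared second differences (bending energy)
    plus a weighted squared error at the measured nodes.
    """
    u = np.zeros(n)
    idx = np.array(sorted(data))
    d = np.array([data[k] for k in sorted(data)])
    for _ in range(steps):
        # Discrete curvature c_i = u[i-1] - 2u[i] + u[i+1] (zero at ends).
        c = np.zeros(n)
        c[1:-1] = u[:-2] - 2 * u[1:-1] + u[2:]
        # Gradient of sum(c**2) is 2*(c[j-1] - 2c[j] + c[j+1]).
        g = -4.0 * c
        g[:-1] += 2.0 * c[1:]
        g[1:] += 2.0 * c[:-1]
        # Data term pulls measured nodes toward their samples.
        g[idx] += 2.0 * weight * (u[idx] - d)
        u -= rate * g
    return u

# Example: 8 nodes, depth known only at nodes 1 and 6.
print(thin_plate_1d(8, {1: 0.0, 6: 1.0}))
```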


Using Backpropagation with Temporal Windows to Learn the Dynamics of the CMU Direct-Drive Arm II

Neural Information Processing Systems

K. Y. Goldberg and B. A. Pearlmutter, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213. ABSTRACT Computing the inverse dynamics of a robot arm is an active area of research in the control literature. We hope to learn the inverse dynamics by training a neural network on the measured response of a physical arm. The input to the network is a temporal window of measured positions; the output is a vector of torques. We train the network on data measured from the first two joints of the CMU Direct-Drive Arm II as it moves through a randomly generated sample of "pick-and-place" trajectories. We then test generalization on a new trajectory and compare the network's output with the torque measured at the physical arm.
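
The temporal-window encoding is easy to make concrete. Below is a minimal Python sketch of how such training pairs might be assembled (the window length, array names, and shapes are illustrative choices, not taken from the paper):

```python
import numpy as np

def make_windows(positions, torques, window=5):
    """Build (input, target) pairs for inverse-dynamics learning.

    positions -- array of shape (T, n_joints), sampled joint angles
    torques   -- array of shape (T, n_joints), measured motor torques
    window    -- number of consecutive position samples per input

    Each input is a flattened window of `window` position vectors;
    the target is the torque at the window's final time step.
    """
    X, Y = [], []
    for t in range(window - 1, len(positions)):
        X.append(positions[t - window + 1 : t + 1].reshape(-1))
        Y.append(torques[t])
    return np.array(X), np.array(Y)

# Example: 2 joints, 1000 samples, 5-step windows -> inputs of size 10.
T, n_joints = 1000, 2
pos = np.random.randn(T, n_joints)
tau = np.random.randn(T, n_joints)
X, Y = make_windows(pos, tau)
print(X.shape, Y.shape)  # (996, 10) (996, 2)
```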


Consonant Recognition by Modular Construction of Large Phonemic Time-Delay Neural Networks

Neural Information Processing Systems

Encouraged by these results, we wanted to explore the question of how we might expand on these models to make them useful for the design of speech recognition systems. A problem that emerges as we attempt to apply neural network models to the full speech recognition problem is the problem of scaling. Simply extending neural networks to ever larger structures and retraining them as one monolithic net quickly exceeds the capabilities of the fastest and largest supercomputers. The search complexity of finding good solutions in a huge space of possible network configurations also soon assumes unmanageable proportions. Moreover, having to decide on all possible classes for recognition ahead of time, as well as collecting sufficient data to train such a large monolithic network, is impractical to say the least. In an effort to extend our models from small recognition tasks to large-scale speech recognition systems, we must therefore explore modularity and incremental learning as design strategies to break up a large learning task into smaller subtasks. Breaking up a large task into subtasks to be tackled by individual black boxes interconnected in ad hoc arrangements, on the other hand, would mean abandoning one of the most attractive aspects of connectionism: the ability to perform complex constraint satisfaction in a massively parallel and interconnected fashion, in view of an overall optimal performance goal.
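
The shared-weight, shift-invariant layer underlying such phonemic time-delay networks can be sketched compactly. The following Python fragment is a generic single TDNN layer (frame counts, feature sizes, and the tanh nonlinearity are illustrative assumptions, not the authors' exact architecture):

```python
import numpy as np

def tdnn_layer(x, W, b):
    """One time-delay (TDNN) layer as a 1D convolution over time.

    x -- input of shape (T, n_in), e.g. T frames of n_in spectral coefficients
    W -- weights of shape (delay, n_in, n_out), shared across all time shifts
    b -- bias of shape (n_out,)

    The same weight window slides along the time axis, so the layer
    responds to a feature pattern regardless of when it occurs.
    """
    delay, n_in, n_out = W.shape
    T = x.shape[0]
    out = np.empty((T - delay + 1, n_out))
    for t in range(T - delay + 1):
        window = x[t : t + delay]                     # (delay, n_in)
        out[t] = np.tensordot(window, W, axes=2) + b  # shared weights
    return np.tanh(out)

# Example: 15 frames of 16 spectral coefficients, 3-frame delays, 8 units.
x = np.random.randn(15, 16)
W = np.random.randn(3, 16, 8) * 0.1
h = tdnn_layer(x, W, np.zeros(8))
print(h.shape)  # (13, 8)
```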


Temporal Representations in a Connectionist Speech System

Neural Information Processing Systems

Erich J. Smythe, 207 Greenmanville Ave, #6, Mystic, CT 06355. ABSTRACT SYREN is a connectionist model that uses temporal information in a speech signal for syllable recognition. It classifies the rates and directions of formant center transitions, and uses an adaptive method to associate transition events with each syllable. The system uses explicit spatial representations of temporal information through delay lines. SYREN uses implicit parametric temporal representations in formant transition classification through node activation onset, decay, and transition delays in sub-networks analogous to visual motion detector cells. SYREN recognizes 79% of six repetitions of 24 consonant-vowel syllables when tested on unseen data, and recognizes 100% of its training syllables. INTRODUCTION Living organisms exist in a dynamic environment. Problem-solving systems, both natural and synthetic, must relate and interpret events that occur over time.
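
The explicit delay-line representation can be illustrated in a few lines. This sketch shows the generic tapped-delay-line idea only (class name and tap count are mine, not SYREN's implementation):

```python
import numpy as np

class DelayLine:
    """Tapped delay line: a spatial snapshot of recent input history.

    Each call to step() shifts stored activations one tap along the
    line and writes the new sample at tap 0, so downstream units can
    read the last `taps` time steps simultaneously.
    """
    def __init__(self, taps):
        self.buf = np.zeros(taps)

    def step(self, sample):
        self.buf[1:] = self.buf[:-1]   # shift older samples along the line
        self.buf[0] = sample           # newest sample enters at tap 0
        return self.buf.copy()

line = DelayLine(4)
for s in [1.0, 2.0, 3.0]:
    print(line.step(s))
# [1 0 0 0] -> [2 1 0 0] -> [3 2 1 0]
```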


What Size Net Gives Valid Generalization?

Neural Information Processing Systems

We address the question of when a network can be expected to generalize from m random training examples chosen from some arbitrary probability distribution, assuming that future test examples are drawn from the same distribution. Among our results are the following bounds on appropriate sample vs. network size.
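
The abstract stops short of stating the bounds. For orientation, the headline results of the published paper have roughly the following form, with W the number of weights, N the number of computational units, and epsilon the target error rate (a paraphrase of the well-known theorem, not text from this excerpt):

```latex
% Sufficiency: if a feedforward net is trained on m random examples with
\[
  m \;\ge\; O\!\left(\frac{W}{\epsilon}\,\log\frac{N}{\epsilon}\right)
\]
% and classifies at least a fraction 1 - \epsilon/2 of them correctly,
% then with high confidence it misclassifies at most a fraction \epsilon
% of future examples drawn from the same distribution.
%
% Necessity: for some distributions, on the order of
\[
  m \;=\; \Omega\!\left(\frac{W}{\epsilon}\right)
\]
% examples are required for any such guarantee.
```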


A Back-Propagation Algorithm with Optimal Use of Hidden Units

Neural Information Processing Systems

The algorithm can automatically find optimal or nearly optimal architectures necessary to solve known Boolean functions, facilitate the interpretation of the activation of the remaining hidden units and automatically estimate the complexity of architectures appropriate for phonetic labeling problems. The general principle of the algorithm can also be adapted to different tasks: for example, it can be used to eliminate the [0, 0] local minimum of the [-1.
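
One common way to realize "optimal use of hidden units" is to add a penalty on hidden activations to the usual error, so that units the task does not need fall silent and can be pruned. The sketch below assumes that form of cost function; the penalty term and its coefficient are illustrative, not necessarily the paper's exact formulation:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    h = np.tanh(x @ W1 + b1)   # hidden activations
    y = np.tanh(h @ W2 + b2)   # network output
    return h, y

def cost(x, t, W1, b1, W2, b2, mu=0.01):
    """Squared error plus a penalty on hidden-unit activation energy.

    The mu-term (an assumed penalty, for illustration) pressures
    superfluous hidden units toward zero activation; units that stay
    silent across the training set can be removed, leaving a near-
    minimal architecture and more interpretable remaining units.
    """
    h, y = forward(x, W1, b1, W2, b2)
    err = np.sum((y - t) ** 2)
    penalty = mu * np.sum(h ** 2)
    return err + penalty
```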


Scaling and Generalization in Neural Networks: A Case Study

Neural Information Processing Systems

The issues of scaling and generalization have emerged as key issues in current studies of supervised learning from examples in neural networks. Questions such as how many training patterns and training cycles are needed for a problem of a given size and difficulty, how to represent the inputs, and how to choose useful training exemplars, are of considerable theoretical and practical importance. Several intuitive rules of thumb have been obtained from empirical studies, but as yet there are few rigorous results. In this paper we summarize a study of generalization in the simplest possible case: perceptron networks learning linearly separable functions. The task chosen was the majority function (i.e. return a 1 if a majority of the input units are on), a predicate with a number of useful properties. We find that many aspects of generalization in multilayer networks learning large, difficult tasks are reproduced in this simple domain, in which concrete numerical results and even some analytic understanding can be achieved.
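
The experimental setup is simple enough to sketch end to end. The following Python fragment trains a classic perceptron on the majority function and measures generalization on fresh patterns (input size, sample counts, and the epoch limit are illustrative choices, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def majority(x):
    """Target predicate: 1 if a majority of the inputs are on, else 0."""
    return float(x.sum() > len(x) / 2)

# Random training patterns for a 25-input majority function.
n, m = 25, 200
X = rng.integers(0, 2, size=(m, n)).astype(float)
y = np.array([majority(x) for x in X])

# Classic perceptron learning rule (guaranteed to converge here,
# since majority is linearly separable).
w, b = np.zeros(n), 0.0
for epoch in range(100):
    errors = 0
    for x, t in zip(X, y):
        p = float(w @ x + b > 0)
        if p != t:
            w += (t - p) * x
            b += (t - p)
            errors += 1
    if errors == 0:
        break

# Test generalization on fresh random patterns.
Xt = rng.integers(0, 2, size=(1000, n)).astype(float)
yt = np.array([majority(x) for x in Xt])
acc = np.mean((Xt @ w + b > 0) == yt.astype(bool))
print(f"test accuracy: {acc:.3f}")
```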


Modeling the Olfactory Bulb - Coupled Nonlinear Oscillators

Neural Information Processing Systems

A mathematical model based on the bulbar anatomy and electrophysiology is described. Simulations produce a 35-60 Hz modulated activity coherent across the bulb, mimicking the observed field potentials. The decision states (for the odor information) here can be thought of as stable cycles, rather than the stable fixed points typical of simpler neuro-computing models. Analysis and simulations show that a group of coupled nonlinear oscillators is responsible for the oscillatory activities determined by the odor input, and that the bulb, with appropriate inputs from higher centers, can enhance or suppress the sensitivity to particular odors. The model provides a framework in which to understand the transform between odor input and the bulbar output to olfactory cortex.
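
The oscillatory mechanism can be illustrated with a single coupled excitatory (mitral-like) and inhibitory (granule-like) pair. The sketch below is a generic Wilson-Cowan-style system, not the paper's fitted bulb model; the coupling constants and time constants are assumptions tuned only to land the limit cycle near the reported band:

```python
import numpy as np

def bulb_oscillator(T=0.5, dt=1e-4, I=0.5, tau=0.005):
    """One coupled excitatory/inhibitory pair, integrated with Euler steps.

    x -- excitatory (mitral-like) activity, y -- inhibitory (granule-like).
    With 5 ms time constants these (assumed) couplings put the limit
    cycle near 40 Hz, inside the 35-60 Hz band of the field potentials.
    """
    x, y = 0.0, 0.0
    xs = []
    for _ in range(int(T / dt)):
        dx = (-x + np.tanh(2.5 * x - 2.0 * y + I)) / tau  # odor input I drives x
        dy = (-y + np.tanh(2.0 * x)) / tau                # x excites the inhibitor
        x, y = x + dt * dx, y + dt * dy
        xs.append(x)
    return np.array(xs)

trace = bulb_oscillator()
# Dominant frequency of the steady-state oscillation:
spec = np.abs(np.fft.rfft(trace[2000:] - trace[2000:].mean()))
freqs = np.fft.rfftfreq(len(trace) - 2000, d=1e-4)
print(f"peak near {freqs[spec.argmax()]:.0f} Hz")
```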


Cricket Wind Detection

Neural Information Processing Systems

A great deal of interest has recently been focused on theories concerning parallel distributed processing in central nervous systems. In particular, many researchers have become very interested in the structure and function of "computational maps" in sensory systems. As defined in a recent review (Knudsen et al., 1987), a "map" is an array of nerve cells, within which there is a systematic variation in the "tuning" of neighboring cells for a particular parameter. For example, the projection from retina to visual cortex is a relatively simple topographic map; each cortical hypercolumn itself contains a more complex "computational" map of preferred line orientation, representing the angle of tilt of a simple line stimulus. The overall goal of the research in my lab is to determine how a relatively complex mapped sensory system extracts and encodes information from external stimuli.