Propagation Filters in PDS Networks for Sequencing and Ambiguity Resolution

Neural Information Processing Systems

We present a Parallel Distributed Semantic (PDS) Network architecture that addresses the problems of sequencing and ambiguity resolution in natural language understanding. A PDS Network stores phrases and their meanings using multiple PDP networks, structured in the form of a semantic net. A mechanism called Propagation Filters is employed: (1) to control communication between networks, (2) to properly sequence the components of a phrase, and (3) to resolve ambiguities. Simulation results indicate that PDS Networks and Propagation Filters can successfully represent high-level knowledge, can be trained relatively quickly, and provide for parallel inferencing at the knowledge level.

1 INTRODUCTION

Backpropagation has shown considerable potential for addressing problems in natural language processing (NLP). However, the traditional PDP [Rumelhart and McClelland, 1986] approach of using one (or a small number of) backprop networks for NLP has been plagued by a number of problems: (1) it has been largely unsuccessful at representing high-level knowledge, (2) the networks are slow to train, and (3) they are sequential at the knowledge level.
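The abstract does not spell out the filters' mechanics, but the gating idea can be illustrated with a minimal sketch: a filter holds a key pattern and passes a source network's output to its destination only when a separate gating pattern matches that key. The similarity measure, threshold, and all names below are our assumptions, not details from the paper.

import numpy as np

def propagation_filter(gate_pattern, key_pattern, source_output, threshold=0.8):
    # Open the pathway only when the gating pattern resembles this filter's
    # key pattern; otherwise block propagation entirely. (Cosine similarity
    # and the 0.8 threshold are illustrative choices, not the paper's.)
    similarity = float(gate_pattern @ key_pattern) / (
        np.linalg.norm(gate_pattern) * np.linalg.norm(key_pattern) + 1e-9)
    return source_output if similarity >= threshold else np.zeros_like(source_output)

# Example: an ambiguous phrase's output reaches a word-sense network
# only when the context gate matches that sense's key.
gate = np.array([0.9, 0.1, 0.8])
key = np.array([1.0, 0.0, 1.0])
print(propagation_filter(gate, key, np.array([0.7, 0.3])))  # passes through

Under this reading, sequencing and disambiguation both reduce to which filters happen to be open when a component's activation arrives.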


Simulation of Optimal Movements Using the Minimum-Muscle-Tension-Change Model

Neural Information Processing Systems

This work discusses various optimization techniques that have been proposed in models of arm movement control. In particular, the minimum-muscle-tension-change model is investigated. A dynamic simulator of the monkey's arm, including seventeen single- and double-joint muscles, is utilized to generate horizontal hand movements. The hand trajectories produced by this algorithm are discussed.
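For reference, the criterion this model optimizes is usually stated as the integrated squared rate of change of the muscle tensions T_i over the movement duration t_f (the notation here is ours, with the sum running over the seventeen muscles):

C = \frac{1}{2} \int_{0}^{t_f} \sum_{i=1}^{17} \left( \frac{dT_i}{dt} \right)^{2} dt

Among all tension profiles that bring the hand to the target, the model selects the one with the smallest C.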


Fast, Robust Adaptive Control by Learning only Forward Models

Neural Information Processing Systems

A large class of motor control tasks requires that on each cycle the controller is told its current state and must choose an action to achieve a specified, state-dependent, goal behaviour. This paper argues that the optimization of learning rate, the number of experimental control decisions before adequate performance is obtained, and robustness is of prime importance, if necessary at the expense of computation per control cycle and memory requirement. This is motivated by the observation that a robot which requires two thousand learning steps to achieve adequate performance, or a robot which occasionally gets stuck while learning, will always be undesirable, whereas moderate computational expense can be accommodated by increasingly powerful computer hardware. It is not unreasonable to assume the existence of inexpensive 100 Mflop controllers within a few years, and so even processes with control cycles in the low tens of milliseconds will have millions of machine instructions in which to make their decisions. This paper outlines a learning control scheme which aims to make effective use of such computational power.

1 MEMORY-BASED LEARNING

Memory-based learning is an approach applicable to both classification and function learning in which all experiences presented to the learning box are explicitly remembered. The memory, Mem, is a set of input-output pairs, Mem = {(x1, y1), (x2, y2), ..., (xk, yk)}.
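As a concrete illustration of this storage scheme, here is a minimal sketch of memory-based function learning with nearest-neighbour prediction. Nearest neighbour is only the simplest way to answer queries from the memory; the paper's machinery is richer, and every name below is ours.

import numpy as np

Mem = []  # the memory: a list of (input, output) pairs, kept verbatim

def remember(x, y):
    Mem.append((np.asarray(x, dtype=float), np.asarray(y, dtype=float)))

def predict(x_query):
    # Answer a query with the output of the closest remembered input.
    x_query = np.asarray(x_query, dtype=float)
    x_near, y_near = min(Mem, key=lambda pair: np.linalg.norm(pair[0] - x_query))
    return y_near

remember([0.0, 0.0], [1.0])
remember([1.0, 1.0], [2.0])
print(predict([0.9, 1.1]))  # nearest stored input is (1.0, 1.0) -> [2.0]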


Markov Random Fields Can Bridge Levels of Abstraction

Neural Information Processing Systems

Network vision systems must make inferences from evidential information across levels of representational abstraction, from low level invariants, through intermediate scene segments, to high level behaviorally relevant object descriptions. This paper shows that such networks can be realized as Markov Random Fields (MRFs). We show first how to construct an MRF functionally equivalent to a Hough transform parameter network, thus establishing a principled probabilistic basis for visual networks. Second, we show that these MRF parameter networks are more capable and flexible than traditional methods. In particular, they have a well-defined probabilistic interpretation, intrinsically incorporate feedback, and offer richer representations and decision capabilities.
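The MRF construction itself is not reproduced here, but the parameter network it is shown to be functionally equivalent to is the classical Hough transform, sketched below: each edge point votes for every discretized line consistent with it, and peaks in the accumulator indicate lines. The discretization choices are arbitrary.

import numpy as np

def hough_lines(edge_points, n_theta=180, n_rho=200, rho_max=100.0):
    # Accumulate votes over (theta, rho) cells for lines
    # x*cos(theta) + y*sin(theta) = rho.
    acc = np.zeros((n_theta, n_rho))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    for x, y in edge_points:
        for ti, th in enumerate(thetas):
            rho = x * np.cos(th) + y * np.sin(th)
            ri = int(round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)))
            if 0 <= ri < n_rho:
                acc[ti, ri] += 1
    return acc, thetas

acc, thetas = hough_lines([(10, 10), (20, 20), (30, 30)])  # points on y = x
ti, ri = np.unravel_index(acc.argmax(), acc.shape)
print("best theta (radians):", thetas[ti])  # close to 3*pi/4 for y = x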


Unsupervised learning of distributions on binary vectors using two layer networks

Neural Information Processing Systems

We study a particular type of Boltzmann machine with a bipartite graph structure called a harmonium. Our interest is in using such a machine to model a probability distribution on binary input vectors. We analyze the class of probability distributions that can be modeled by such machines.
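Because the graph is bipartite, the binary hidden units can be summed out in closed form, which is what makes the class of modeled distributions tractable to analyze. Assuming {0,1} units and the standard energy E(v, h) = -a·v - b·h - v'Wh, the unnormalized marginal over a visible vector v is exp(a·v) times one factor per hidden unit; the sketch below computes it with random placeholder parameters.

import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 3
W = rng.normal(size=(n_visible, n_hidden))  # pairwise weights (placeholder)
a = rng.normal(size=n_visible)              # visible biases (placeholder)
b = rng.normal(size=n_hidden)               # hidden biases (placeholder)

def unnormalized_p(v):
    # p(v) is proportional to exp(a.v) * prod_j (1 + exp(b_j + v.W[:, j])),
    # obtained by summing the Boltzmann factor over all binary hidden states.
    v = np.asarray(v, dtype=float)
    return np.exp(a @ v) * np.prod(1.0 + np.exp(b + v @ W))

print(unnormalized_p([1, 0, 1, 0]))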


Learning Unambiguous Reduced Sequence Descriptions

Neural Information Processing Systems

Do you want your neural net algorithm to learn sequences? Do not limit yourself to conventional gradient descent (or approximations thereof). Instead, use your sequence learning algorithm (any will do) to implement the following method for history compression. No matter what your final goals are, train a network to predict its next input from the previous ones. Since only unpredictable inputs convey new information, ignore all predictable inputs but let all unexpected inputs (plus information about the time step at which they occurred) become inputs to a higher-level network of the same kind (working on a slower, self-adjusting time scale). Go on building a hierarchy of such networks.
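The recipe translates almost directly into code. In the sketch below, a trivial bigram frequency table stands in for the predictor (the text says any sequence learner will do), and only mispredicted symbols, tagged with their time steps, survive to feed the next level; all names are ours.

from collections import defaultdict

class BigramPredictor:
    # A placeholder predictor: guesses the most frequent successor seen so far.
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
    def predict(self, prev):
        nxt = self.counts[prev]
        return max(nxt, key=nxt.get) if nxt else None
    def update(self, prev, x):
        self.counts[prev][x] += 1

def compress(sequence, predictor):
    # Keep only unexpected inputs, each paired with its time step.
    reduced, prev = [], None
    for t, x in enumerate(sequence):
        if predictor.predict(prev) != x:   # unpredictable -> new information
            reduced.append((t, x))
        predictor.update(prev, x)
        prev = x
    return reduced

seq = list("abababababcabab")
print(compress(seq, BigramPredictor()))  # startup symbols plus the surprise 'c'

Feeding each level's reduced sequence to another predictor of the same kind yields the hierarchy described above, with ever shorter sequences at higher levels.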


Temporal Adaptation in a Silicon Auditory Nerve

Neural Information Processing Systems

Many auditory theorists consider the temporal adaptation of the auditory nerve a key aspect of speech coding in the auditory periphery. Experiments with models of auditory localization and pitch perception also suggest temporal adaptation is an important element of practical auditory processing. I have designed, fabricated, and successfully tested an analog integrated circuit that models many aspects of auditory nerve response, including temporal adaptation.
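The chip itself is analog, but the firing-rate behaviour being modeled can be sketched with a simple first-order adaptation rule: a slowly tracking baseline is subtracted from the input, so the response is strong at stimulus onset and decays toward a sustained level. The rectification, time step, and time constant below are illustrative assumptions, not the circuit's parameters.

import numpy as np

def adapt(signal, dt=1e-3, tau=0.05):
    # Rectified difference between the input and a slowly adapting baseline.
    baseline, out = 0.0, []
    for x in signal:
        out.append(max(x - baseline, 0.0))       # emphasizes onsets
        baseline += (dt / tau) * (x - baseline)  # baseline follows the input
    return np.array(out)

tone = np.concatenate([np.zeros(100), np.ones(300), np.zeros(100)])
resp = adapt(tone)
print(resp[100], resp[350])  # near 1.0 at onset, near 0.0 once adapted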



Decoding of Neuronal Signals in Visual Pattern Recognition

Neural Information Processing Systems

We have investigated the properties of neurons in inferior temporal (IT) cortex in monkeys performing a pattern matching task. Simple backpropagation networks were trained to discriminate the various stimulus conditions on the basis of the measured neuronal signal. We also trained networks to predict the neuronal response waveforms from the spatial patterns of the stimuli. The results indicate that IT neurons convey temporally encoded information about both current and remembered patterns, as well as about their behavioral context.


A Comparison of Projection Pursuit and Neural Network Regression Modeling

Neural Information Processing Systems

Two projection-based feedforward network learning methods for model-free regression problems are studied and compared in this paper: one is the popular back-propagation learning (BPL); the other is the projection pursuit learning (PPL).
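Both methods fit the same projection-based form (notation ours):

\hat{f}(x) = \sum_{k=1}^{K} \beta_k \, g_k\!\left( \alpha_k^{\top} x \right)

Roughly, BPL fixes each ridge function g_k to a sigmoid and trains all projections \alpha_k in parallel by gradient descent, while PPL estimates a flexible g_k nonparametrically and fits the projections one at a time.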