Neural Computing with Small Weights

Neural Information Processing Systems

An important issue in neural computation is the dynamic range of weights in the neural networks. Many experimental results on learning indicate that the weights in the networks can grow prohibitively large with the size of the inputs. Here we address this issue by studying the tradeoffs between the depth and the size of weights in polynomial-size networks of linear threshold elements (LTEs). We show that there is an efficient way of simulating a network of LTEs with large weights by a network of LTEs with small weights. To prove these results, we use tools from harmonic analysis of Boolean functions.
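
As a concrete illustration of the objects involved (a hedged sketch, not the paper's construction), the Python fragment below implements a single linear threshold element and evaluates the classic comparison function, whose single-LTE realization uses weights that grow exponentially with the input length; the paper's results concern simulating such large-weight elements with small-weight, slightly deeper circuits.

    import numpy as np

    def lte(weights, threshold, x):
        """Linear threshold element: output 1 if the weighted sum reaches the threshold."""
        return int(np.dot(weights, x) >= threshold)

    # Comparing two n-bit numbers X > Y with one LTE uses powers-of-two weights,
    # i.e. weights exponential in n; this is the kind of dynamic range the paper addresses.
    n = 4
    w = np.array([2 ** i for i in range(n - 1, -1, -1)] +
                 [-(2 ** i) for i in range(n - 1, -1, -1)])
    x_bits = [1, 0, 1, 1]   # X = 11
    y_bits = [1, 0, 0, 1]   # Y = 9
    print(lte(w, 1, np.array(x_bits + y_bits)))   # prints 1 because X > Y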


Learning Global Direct Inverse Kinematics

Neural Information Processing Systems

When the task-space dimension is smaller than the configuration-space dimension, the robot has redundant degrees of freedom (dof's). In general, control objectives such as the positioning and orienting of the end-effector are specified with respect to task-space coordinates; however, the manipulator is typically controlled only in the configuration space.
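
For readers unfamiliar with the terminology, here is a minimal sketch (with assumed link lengths and an off-the-shelf optimizer, not the paper's method) of a planar two-link arm: forward kinematics maps configuration-space joint angles to a task-space position, and an inverse is recovered numerically for one target point.

    import numpy as np
    from scipy.optimize import minimize

    l1, l2 = 1.0, 0.8   # assumed link lengths

    def forward(theta):
        """Configuration space (joint angles) -> task space (end-effector position)."""
        t1, t2 = theta
        x = l1 * np.cos(t1) + l2 * np.cos(t1 + t2)
        y = l1 * np.sin(t1) + l2 * np.sin(t1 + t2)
        return np.array([x, y])

    def inverse(target, theta0=np.zeros(2)):
        """Numerically invert the kinematic map for one reachable target point."""
        error = lambda th: np.sum((forward(th) - target) ** 2)
        return minimize(error, theta0).x

    theta = inverse(np.array([1.2, 0.5]))
    print(forward(theta))   # approximately [1.2, 0.5]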


Unsupervised Classifiers, Mutual Information and 'Phantom Targets'

Neural Information Processing Systems

We derive criteria for training adaptive classifier networks to perform unsupervised data analysis. The first criterion turns a simple Gaussian classifier into a simple Gaussian mixture analyser. The second criterion, which is much more generally applicable, is based on mutual information.
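
A common form of such a mutual-information criterion for a softmax classifier is I(class; input) = H(mean output) - mean H(output); the toy sketch below (illustrative values, not the paper's experiments) computes it from a matrix of class posteriors.

    import numpy as np

    def entropy(p, eps=1e-12):
        return -np.sum(p * np.log(p + eps), axis=-1)

    def mutual_information(class_probs):
        """class_probs: (n_samples, n_classes) softmax outputs of the classifier."""
        marginal = class_probs.mean(axis=0)            # average class usage
        return entropy(marginal) - entropy(class_probs).mean()

    # Confident, balanced assignments give high mutual information.
    probs = np.array([[0.95, 0.05], [0.9, 0.1], [0.1, 0.9], [0.05, 0.95]])
    print(mutual_information(probs))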


Benchmarking Feed-Forward Neural Networks: Models and Measures

Neural Information Processing Systems

Existing metrics for the learning performance of feed-forward neural networks do not provide a satisfactory basis for comparison because the choice of the training epoch limit can determine the results of the comparison. I propose new metrics which have the desirable property of being independent of the training epoch limit. The efficiency measures the yield of correct networks in proportion to the training effort expended. The optimal epoch limit provides the greatest efficiency. The learning performance is modelled statistically, and asymptotic performance is estimated. Implementation details may be found in (Harney, 1992).
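
One plausible reading of the efficiency measure (a hedged sketch with toy run data; the paper's exact definition may differ) is the number of runs that yield a correct network divided by the total training effort spent under a given epoch limit:

    import numpy as np

    def efficiency(epochs_to_converge, epoch_limit):
        """epochs_to_converge: epochs each run needed, or np.inf if it never converged."""
        runs = np.asarray(epochs_to_converge, dtype=float)
        correct = runs <= epoch_limit
        # Effort: converged runs cost their own epochs, failures cost the full limit.
        effort = np.where(correct, runs, epoch_limit).sum()
        return correct.sum() / effort

    runs = [120, 300, np.inf, 80, 450, np.inf]
    for limit in (100, 200, 400, 800):
        print(limit, efficiency(runs, limit))   # efficiency varies with the epoch limit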


A Comparison of Projection Pursuit and Neural Network Regression Modeling

Neural Information Processing Systems

Two projection-based feedforward network learning methods for model-free regression problems are studied and compared in this paper: one is the popular back-propagation learning (BPL); the other is projection pursuit learning (PPL).
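
Structurally, both methods fit sums of ridge functions g_k(a_k . x); BPL fixes g_k to a sigmoid, while PPL also estimates each g_k from the data. The fragment below is a schematic sketch of that difference (illustrative parameter values only, not either method's training procedure).

    import numpy as np

    def bpl_unit(x, a, b):
        """One hidden unit of a sigmoidal network: fixed nonlinearity."""
        return 1.0 / (1.0 + np.exp(-(x @ a + b)))

    def ppl_unit(x, a, ridge_coeffs):
        """One projection pursuit term: learned 1-D ridge function (here a cubic)."""
        z = x @ a                       # project onto the learned direction
        return np.polyval(ridge_coeffs, z)

    x = np.random.default_rng(0).normal(size=(5, 3))
    a = np.array([0.5, -0.2, 0.1])
    print(bpl_unit(x, a, 0.0))
    print(ppl_unit(x, a, [0.1, 0.0, 1.0, 0.0]))   # 0.1*z**3 + z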


Fault Diagnosis of Antenna Pointing Systems using Hybrid Neural Network and Signal Processing Models

Neural Information Processing Systems

Padhraic Smyth, J eft" Mellstrom Jet Propulsion Laboratory 238-420 California Institute of Technology Pasadena, CA 91109 Abstract We describe in this paper a novel application of neural networks to system health monitoring of a large antenna for deep space communications. The paper outlines our approach to building a monitoring system using hybrid signal processing and neural network techniques, including autoregressive modelling, pattern recognition, and Hidden Markov models. We discuss several problems which are somewhat generic in applications of this kind - in particular we address the problem of detecting classes which were not present in the training data. Experimental results indicate that the proposed system is sufficiently reliable for practical implementation. 1 Background: The Deep Space Network The Deep Space Network (DSN) (designed and operated by the Jet Propulsion Laboratory (JPL)for the National Aeronautics and Space Administration (NASA)) is unique in terms of ...
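
The following sketch conveys the general flavour of such a hybrid pipeline under assumed settings (it is not the authors' implementation): autoregressive coefficients of a sensor window serve as features, and a window whose classifier posteriors are all below a confidence threshold is flagged as a possible class not seen in training.

    import numpy as np

    def ar_features(x, order=4):
        """Least-squares AR(order) coefficients of a 1-D signal window."""
        X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
        y = x[order:]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs

    def classify_with_reject(posteriors, threshold=0.7):
        """Return the predicted class, or -1 ('unknown') if no class is confident enough."""
        k = int(np.argmax(posteriors))
        return k if posteriors[k] >= threshold else -1

    rng = np.random.default_rng(1)
    window = np.sin(0.2 * np.arange(200)) + 0.1 * rng.normal(size=200)
    print(ar_features(window))
    print(classify_with_reject(np.array([0.4, 0.35, 0.25])))   # -1: treat as novel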


Adaptive Synchronization of Neural and Physical Oscillators

Neural Information Processing Systems

Animal locomotion patterns are controlled by recurrent neural networks called central pattern generators (CPGs). Although a CPG can oscillate autonomously, its rhythm and phase must be well coordinated with the state of the physical system using sensory inputs. In this paper we propose a learning algorithm for synchronizing neural and physical oscillators with specific phase relationships. Sensory input connections are modified by the correlation between cellular activities and input signals. Simulations show that the learning rule can be used for setting sensory feedback connections to a CPG as well as coupling connections between CPGs.

1 CENTRAL AND SENSORY MECHANISMS IN LOCOMOTION CONTROL
Patterns of animal locomotion, such as walking, swimming, and flying, are generated by recurrent neural networks that are located in segmental ganglia of invertebrates and spinal cords of vertebrates (Barnes and Gladden, 1985).
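
The weight-update idea can be sketched in a few lines (assumed signal forms and constants, not the paper's model): a sensory feedback weight is driven by the time-averaged product of cell activity and sensory input, so an input that is roughly in phase with the cell strengthens its connection.

    import numpy as np

    dt, eta = 0.01, 0.1
    t = np.arange(0.0, 20.0, dt)
    cell = np.sin(2.0 * np.pi * 1.0 * t)            # CPG cell activity
    sensory = np.sin(2.0 * np.pi * 1.0 * t - 0.3)   # sensory input, slightly lagged

    w = 0.0
    for a, s in zip(cell, sensory):
        w += dt * eta * a * s       # correlation-based (Hebbian-style) update

    print(w)   # positive: roughly in-phase input strengthens the feedback connection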


Rule Induction through Integrated Symbolic and Subsymbolic Processing

Neural Information Processing Systems

We describe a neural network, called RuleNet, that learns explicit, symbolic condition-action rules in a formal string manipulation domain. RuleNet discovers functional categories over elements of the domain and, at various points during learning, extracts rules that operate on these categories. The rules are then injected back into RuleNet and training continues, in a process called iterative projection. By incorporating rules in this way, RuleNet exhibits enhanced learning and generalization performance over alternative neural net approaches. By integrating symbolic rule learning and subsymbolic category learning, RuleNet has capabilities that go beyond a purely symbolic system. We show how this architecture can be applied to the problem of case-role assignment in natural language processing, yielding a novel rule-based solution.
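
The iterative-projection loop can be summarized schematically; every helper below is a hypothetical stand-in used only to show the alternation between subsymbolic training and symbolic rule extraction and injection, not RuleNet's actual algorithm.

    def train(weights, data, epochs):           # stand-in: no real learning here
        return weights

    def extract_rules(weights):                 # stand-in: read rules off the weights
        return [("condition", "action")]

    def inject_rules(weights, rules):           # stand-in: re-encode rules as weights
        return weights

    weights, data = {}, []
    for phase in range(3):                      # alternate subsymbolic and symbolic steps
        weights = train(weights, data, epochs=100)
        rules = extract_rules(weights)
        weights = inject_rules(weights, rules)
    print(rules)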


A Cortico-Cerebellar Model that Learns to Generate Distributed Motor Commands to Control a Kinematic Arm

Neural Information Processing Systems

A neurophysiologically-based model is presented that controls a simulated kinematic arm during goal-directed reaches. The network generates a quasi-feedforward motor command that is learned using training signals generated by corrective movements. For each target, the network selects and sets the output of a subset of pattern generators. During the movement, feedback from proprioceptors turns off the pattern generators. The task facing individual pattern generators is to recognize when the arm reaches the target and to turn off. A distributed representation of the motor command that resembles population vectors seen in vivo was produced naturally by these simulations.
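
A toy sketch of that control scheme (assumed one-dimensional dynamics, not the paper's arm model): each selected pattern generator contributes a fixed component to the summed command until proprioceptive feedback indicates the target has been reached, at which point the generators turn off.

    import numpy as np

    dt, target = 0.01, 1.0
    position = 0.0
    pg_gains = np.array([0.4, 0.35, 0.25])      # selected subset of PGs for this target
    active = np.ones_like(pg_gains, dtype=bool)

    for step in range(2000):
        command = np.sum(pg_gains * active)     # distributed motor command
        position += dt * command                # simple kinematic "arm"
        if position >= target:                  # proprioceptive feedback: target reached
            active[:] = False                   # PGs recognize this and turn off

    print(position, active)                     # position ~ target, all PGs off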


Polynomial Uniform Convergence of Relative Frequencies to Probabilities

Neural Information Processing Systems

We define the concept of polynomial uniform convergence of relative frequencies to probabilities in the distribution-dependent context.
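
One standard way to write the requirement being made polynomial (a hedged sketch; the paper's distribution-dependent definition refines it) is that, for a class C, sampling distribution P, and empirical relative frequency \nu_m over m draws,

    \Pr\Big\{ \sup_{c \in C} \big|\nu_m(c) - P(c)\big| > \varepsilon \Big\} \le \delta
    \quad \text{whenever } m \ge m_0(\varepsilon, \delta),
    \qquad \text{with } m_0(\varepsilon, \delta) \le \operatorname{poly}\!\big(\tfrac{1}{\varepsilon}, \tfrac{1}{\delta}\big).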