Benchmarking Feed-Forward Neural Networks: Models and Measures
Existing metrics for the learning performance of feed-forward neural networks do not provide a satisfactory basis for comparison because the choice of the training epoch limit can determine the outcome of the comparison. I propose new metrics which have the desirable property of being independent of the training epoch limit. The efficiency measures the yield of correct networks in proportion to the training effort expended. The optimal epoch limit is the one that provides the greatest efficiency. The learning performance is modelled statistically, and asymptotic performance is estimated. Implementation details may be found in Harney (1992).
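As a rough illustration of this idea (not Harney's exact formulation), the Python sketch below treats efficiency as the fraction of trials that converge within a candidate epoch limit divided by the mean number of epochs expended per trial under that limit, and searches for the limit that maximizes it. The simulated trial data and this particular ratio are assumptions made purely for illustration.

import numpy as np

def efficiency_curve(convergence_epochs, max_limit):
    """For each candidate epoch limit, estimate efficiency as the fraction of
    successful trials divided by the mean training effort (epochs actually
    expended per trial, capped at the limit). `convergence_epochs` holds the
    epoch at which each trial converged, or np.inf if it never converged."""
    epochs = np.asarray(convergence_epochs, dtype=float)
    limits = np.arange(1, max_limit + 1)
    eff = np.empty(limits.shape, dtype=float)
    for i, L in enumerate(limits):
        success = epochs <= L
        effort = np.where(success, epochs, L).mean()   # epochs spent per trial
        eff[i] = success.mean() / effort               # correct nets per epoch of effort
    return limits, eff

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated trials: most converge around epoch 40, some never converge.
    trials = np.where(rng.random(200) < 0.8,
                      rng.normal(40, 10, 200).clip(1),
                      np.inf)
    limits, eff = efficiency_curve(trials, max_limit=200)
    print("optimal epoch limit:", limits[np.argmax(eff)])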
Fault Diagnosis of Antenna Pointing Systems using Hybrid Neural Network and Signal Processing Models
Smyth, Padhraic, Mellstrom, Jeff
We describe in this paper a novel application of neural networks to system health monitoring of a large antenna for deep space communications. The paper outlines our approach to building a monitoring system using hybrid signal processing and neural network techniques, including autoregressive modelling, pattern recognition, and Hidden Markov models. We discuss several problems which are somewhat generic in applications of this kind; in particular, we address the problem of detecting classes which were not present in the training data. Experimental results indicate that the proposed system is sufficiently reliable for practical implementation. 1 Background: The Deep Space Network The Deep Space Network (DSN), designed and operated by the Jet Propulsion Laboratory (JPL) for the National Aeronautics and Space Administration (NASA), is unique in terms of ...
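The Python sketch below is a hypothetical illustration of the kind of hybrid pipeline the abstract describes: autoregressive coefficients estimated from sensor windows serve as features, and a simple classifier with a distance-based rejection region flags windows that resemble no known class instead of forcing them into one. The AR order, classifier, class labels, and rejection threshold are illustrative choices, not taken from the paper.

import numpy as np

def ar_features(window, order=4):
    """Least-squares AR(order) coefficients of one sensor window,
    used as a fixed-length feature vector."""
    x = np.asarray(window, dtype=float)
    X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

class NearestMeanWithReject:
    """Toy classifier: nearest class mean in feature space, plus a distance
    threshold so windows unlike any training class are labelled 'unknown'
    rather than assigned to a known fault class."""
    def fit(self, feats, labels, reject_factor=3.0):
        self.means_ = {c: np.mean([f for f, l in zip(feats, labels) if l == c], axis=0)
                       for c in set(labels)}
        # Per-class typical distance to the mean sets the rejection radius.
        self.radius_ = {c: reject_factor * np.mean(
            [np.linalg.norm(f - m) for f, l in zip(feats, labels) if l == c])
            for c, m in self.means_.items()}
        return self

    def predict(self, feat):
        dists = {c: np.linalg.norm(feat - m) for c, m in self.means_.items()}
        best = min(dists, key=dists.get)
        return best if dists[best] <= self.radius_[best] else "unknown"

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.arange(512)
    normal = [np.sin(0.2 * t) + 0.1 * rng.normal(size=t.size) for _ in range(20)]
    faulty = [np.sin(0.2 * t) + 0.8 * rng.normal(size=t.size) for _ in range(20)]
    feats = [ar_features(w) for w in normal + faulty]
    labels = ["normal"] * 20 + ["fault"] * 20
    clf = NearestMeanWithReject().fit(feats, labels)
    print(clf.predict(ar_features(np.sin(0.2 * t) + 0.1 * rng.normal(size=t.size))))  # expected: 'normal'
    print(clf.predict(ar_features(rng.normal(size=t.size))))  # unstructured noise, likely 'unknown'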
Adaptive Synchronization of Neural and Physical Oscillators
Animal locomotion patterns are controlled by recurrent neural networks called central pattern generators (CPGs). Although a CPG can oscillate autonomously, its rhythm and phase must be well coordinated with the state of the physical system using sensory inputs. In this paper we propose a learning algorithm for synchronizing neural and physical oscillators with specific phase relationships. Sensory input connections are modified by the correlation between cellular activities and input signals. Simulations show that the learning rule can be used for setting sensory feedback connections to a CPG as well as coupling connections between CPGs. 1 CENTRAL AND SENSORY MECHANISMS IN LOCOMOTION CONTROL Patterns of animal locomotion, such as walking, swimming, and flying, are generated by recurrent neural networks that are located in segmental ganglia of invertebrates and spinal cords of vertebrates (Barnes and Gladden, 1985).
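As a toy illustration of a correlation-based rule of this kind (not the paper's equations), the Python sketch below couples a neural phase oscillator to a physical one and adapts the sensory input weight in proportion to the product of the oscillator's activity and the sensory signal, with a small decay term to keep the weight bounded. The phase-oscillator model, frequencies, learning rate, and decay are all assumptions.

import numpy as np

def simulate(T=200.0, dt=0.01, eta=0.05):
    """Toy demonstration: a neural phase oscillator receives a sensory signal
    from a physical oscillator; its input weight is adapted in proportion to
    the correlation between its own activity and the sensory input (a
    Hebbian-style rule), pulling the two rhythms toward a phase relationship."""
    omega_n, omega_p = 2.0 * np.pi * 1.0, 2.0 * np.pi * 1.1   # natural frequencies
    phi_n, phi_p, w = 0.0, 0.5, 0.0                           # phases and coupling weight
    for _ in range(int(T / dt)):
        sensory = np.sin(phi_p)                 # input from the physical oscillator
        activity = np.sin(phi_n)                # neural oscillator output
        # Phase dynamics: the weighted sensory input entrains the neural phase.
        phi_n += dt * (omega_n + w * sensory * np.cos(phi_n))
        phi_p += dt * omega_p
        # Correlation-based weight update with a small leak (illustrative rule).
        w += dt * eta * (activity * sensory - 0.1 * w)
    phase_diff = (phi_n - phi_p) % (2.0 * np.pi)
    return w, phase_diff

if __name__ == "__main__":
    w, dphi = simulate()
    print(f"learned coupling weight {w:.2f}, final phase difference {dphi:.2f} rad")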
Rule Induction through Integrated Symbolic and Subsymbolic Processing
McMillan, Clayton, Mozer, Michael C., Smolensky, Paul
We describe a neural network, called RuleNet, that learns explicit, symbolic condition-action rules in a formal string manipulation domain. RuleNet discovers functional categories over elements of the domain and, at various points during learning, extracts rules that operate on these categories. The rules are then injected back into RuleNet and training continues, in a process called iterative projection. By incorporating rules in this way, RuleNet exhibits enhanced learning and generalization performance over alternative neural net approaches. By integrating symbolic rule learning and subsymbolic category learning, RuleNet has capabilities that go beyond a purely symbolic system. We show how this architecture can be applied to the problem of case-role assignment in natural language processing, yielding a novel rule-based solution.
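The iterative projection cycle can be pictured as alternating subsymbolic training with extraction and re-injection of crisp rules. The Python toy below is a deliberately simplified abstraction of that cycle, not RuleNet itself: a linear map learns category assignments, rules are read off as argmax categories, and the projected weights seed the next round of training. The delta-rule learner and all function names are illustrative assumptions.

import numpy as np

def train_step(W, X, Y, lr=0.1):
    # Delta-rule update of a linear category map: a crude stand-in for the
    # subsymbolic learning phase.
    for x, y in zip(X, Y):
        W += lr * np.outer(y - W @ x, x)
    return W

def extract_rules(W):
    # Read crisp rules off the soft weights: each input element is assigned
    # to the category it most strongly supports.
    return np.argmax(W, axis=0)

def inject_rules(W, rules, strength=1.0):
    # Re-encode the crisp rules as weights; training then continues from
    # this projected starting point.
    W_proj = np.zeros_like(W)
    W_proj[rules, np.arange(W.shape[1])] = strength
    return W_proj

def iterative_projection(X, Y, n_cat, n_in, cycles=3, epochs=25):
    # Alternate subsymbolic training with symbolic extraction and injection.
    W = np.zeros((n_cat, n_in))
    for _ in range(cycles):
        for _ in range(epochs):
            W = train_step(W, X, Y)
        W = inject_rules(W, extract_rules(W))
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_in, n_cat = 12, 3
    true_cat = rng.integers(0, n_cat, size=n_in)      # hidden category of each element
    X = np.eye(n_in)                                   # one element active per example
    Y = np.eye(n_cat)[true_cat]                        # its category as the target
    W = iterative_projection(X, Y, n_cat, n_in)
    print((np.argmax(W, axis=0) == true_cat).all())    # True: categories recovered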
A Cortico-Cerebellar Model that Learns to Generate Distributed Motor Commands to Control a Kinematic Arm
Berthier, N. E., Singh, S. P., Barto, A. G., Houk, J. C.
A neurophysiologically-based model is presented that controls a simulated kinematic arm during goal-directed reaches. The network generates a quasi-feedforward motor command that is learned using training signals generated by corrective movements. For each target, the network selects and sets the output of a subset of pattern generators. During the movement, feedback from proprioceptors turns off the pattern generators. The task facing individual pattern generators is to recognize when the arm reaches the target and to turn off. A distributed representation of the motor command that resembles population vectors seen in vivo was produced naturally by these simulations.
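A minimal Python sketch of this control scheme, under loose assumptions, is given below: a subset of pattern generators is selected for a target, each contributing a fixed joint-space command, and each switches itself off once proprioceptive feedback shows little remaining error along its preferred direction. The two-joint kinematic arm, the preferred directions, and the thresholds are illustrative, not the paper's model.

import numpy as np

def reach(target, n_pg=8, dt=0.01, gain=0.5, tol=0.02):
    """Select pattern generators (PGs) for a target, sum their commands into a
    population motor command, and let feedback turn each PG off individually."""
    rng = np.random.default_rng(0)
    directions = rng.normal(size=(n_pg, 2))                 # each PG's preferred joint direction
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    theta = np.zeros(2)                                     # joint angles of a 2-joint kinematic arm
    active = directions @ (target - theta) > tol            # select PGs pointing toward the target
    for _ in range(5000):
        if not active.any():
            break
        command = gain * directions[active].sum(axis=0)     # distributed (population) motor command
        theta = theta + dt * command
        active &= directions @ (target - theta) > tol       # proprioceptive feedback turns PGs off
    return theta

if __name__ == "__main__":
    target = np.array([0.6, -0.3])
    print("final joint angles:", reach(target), "target:", target)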
Propagation Filters in PDS Networks for Sequencing and Ambiguity Resolution
Sumida, Ronald A., Dyer, Michael G.
We present a Parallel Distributed Semantic (PDS) Network architecture that addresses the problems of sequencing and ambiguity resolution in natural language understanding. A PDS Network stores phrases and their meanings using multiple PDP networks, structured in the form of a semantic net. A mechanism called Propagation Filters is employed: (1) to control communication between networks, (2) to properly sequence the components of a phrase, and (3) to resolve ambiguities. Simulation results indicate that PDS Networks and Propagation Filters can successfully represent high-level knowledge, can be trained relatively quickly, and provide for parallel inferencing at the knowledge level. 1 INTRODUCTION Backpropagation has shown considerable potential for addressing problems in natural language processing (NLP). However, the traditional PDP [Rumelhart and McClelland, 1986] approach of using one (or a small number) of backprop networks for NLP has been plagued by a number of problems: (1) it has been largely unsuccessful at representing high-level knowledge, (2) the networks are slow to train, and (3) they are sequential at the knowledge level. A solution to these problems is to represent high-level knowledge structures over a large number of smaller PDP networks.
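The gating idea can be sketched concretely: a propagation filter passes activity from one sub-network to another only when it fits the current context, which is how communication between the many small networks is controlled. The Python toy below uses a cosine-similarity gate as an illustrative stand-in; the representation, threshold, and class interface are assumptions, not the paper's mechanism.

import numpy as np

class PropagationFilter:
    """Toy gate between two sub-networks: activity is passed from the source
    to the destination only when it matches the filter's context pattern
    closely enough (an illustrative reading of 'controls communication
    between networks')."""
    def __init__(self, context_pattern, threshold=0.7):
        self.context = np.asarray(context_pattern, dtype=float)
        self.threshold = threshold

    def propagate(self, source_activity):
        a = np.asarray(source_activity, dtype=float)
        match = a @ self.context / (np.linalg.norm(a) * np.linalg.norm(self.context) + 1e-9)
        return a if match >= self.threshold else np.zeros_like(a)

if __name__ == "__main__":
    # Two candidate meanings of an ambiguous phrase; only the one that
    # matches the current context pattern is allowed through.
    filt = PropagationFilter([1.0, 0.0, 1.0, 0.0])
    print(filt.propagate([0.9, 0.1, 0.8, 0.0]))   # passed through the filter
    print(filt.propagate([0.0, 1.0, 0.1, 0.9]))   # blocked (returns zeros)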