
Constant-Time Loading of Shallow 1-Dimensional Networks

Neural Information Processing Systems

The complexity of learning in shallow 1-dimensional neural networks has been shown elsewhere to be linear in the size of the network. However, when the network has a huge number of units (as cortex has), even linear time may be unacceptable. Furthermore, the algorithm given to achieve this time was based on a single serial processor and was biologically implausible. In this work we consider the more natural parallel model of processing and demonstrate an expected-time complexity that is constant (i.e., independent of the size of the network).


Modeling Applications with the Focused Gamma Net

Neural Information Processing Systems

The focused gamma network is proposed as one of the possible implementations of the gamma neural model. The focused gamma network is compared with the focused backpropagation network and TDNN for a time series prediction problem, and with ADALINE in a system identification problem.
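For context, the gamma model's defining component is the gamma memory, a cascade of leaky integrators whose depth and decay jointly set the memory's resolution and span. The abstract does not state the update rule, but a minimal sketch of the standard gamma memory recursion (the tap count K and decay mu are illustrative parameters) looks like this:

```python
import numpy as np

def gamma_memory(u, K, mu):
    """Run a K-tap gamma memory over a scalar input sequence u.

    Sketch of the standard gamma memory recursion:
        x_0(t) = u(t)
        x_k(t) = (1 - mu) * x_k(t-1) + mu * x_{k-1}(t-1),  k = 1..K
    In a focused gamma network, these taps feed a feedforward layer.
    """
    T = len(u)
    x = np.zeros((T, K + 1))          # tap k at time t
    for t in range(T):
        x[t, 0] = u[t]                # tap 0 is the raw input
        if t > 0:
            for k in range(1, K + 1):
                x[t, k] = (1 - mu) * x[t - 1, k] + mu * x[t - 1, k - 1]
    return x                          # (T, K+1) tap activations

# Example: a 4-tap memory with decay 0.5 over a short ramp signal.
taps = gamma_memory(np.linspace(0.0, 1.0, 10), K=4, mu=0.5)
```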


Human and Machine 'Quick Modeling'

Neural Information Processing Systems

We present here an interesting experiment in 'quick modeling' by humans, performed independently on small samples, in several languages and on two continents, over the last three years. Comparisons to decision tree procedures and neural net processing are given. From these, we conjecture that human reasoning is better represented by the latter, but substantially different from both. Implications for the 'strong convergence hypothesis' between neural networks and machine learning are discussed, now expanded to include human reasoning comparisons.

1 INTRODUCTION

Until recently the fields of symbolic and connectionist learning evolved separately. In the last two years, however, a significant number of papers comparing the two methodologies have suddenly appeared. A beginning synthesis of these two fields was forged at the NIPS '90 Workshop #5 last year (Pratt and Norton, 1990), where one may find a good bibliography of the recent work of Atlas, Dietterich, Omohundro, Sanger, Shavlik, Tsoi, Utgoff and others. It was at that NIPS '90 workshop that we learned of these studies, most of which concentrate on performance comparisons of decision tree algorithms (such as ID3 and CART) and neural net algorithms (such as perceptrons and backpropagation). Independently, three years ago we had looked at Quinlan's ID3 scheme (Quinlan, 1984); finding ourselves intuitively, and rather immediately, in disagreement with the generalization ID3 obtains from a sample of 8 items to 12 items, we subjected this example to a variety of human experiments. We report our findings, compared to the performance of ID3 and to various neural net computations.
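As background for the comparison, ID3 grows its decision tree by repeatedly selecting the attribute with the highest information gain. A minimal sketch of that selection step; the toy dataset and attribute names below are hypothetical, not the paper's 8-item sample:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attr, label_key="label"):
    """Gain from splitting `examples` (a list of dicts) on attribute `attr`."""
    labels = [e[label_key] for e in examples]
    gain = entropy(labels)
    for value in set(e[attr] for e in examples):
        subset = [e[label_key] for e in examples if e[attr] == value]
        gain -= (len(subset) / len(examples)) * entropy(subset)
    return gain

# Hypothetical toy data, not the 8-item sample discussed in the paper.
data = [
    {"size": "small", "color": "red",  "label": "yes"},
    {"size": "small", "color": "blue", "label": "no"},
    {"size": "large", "color": "red",  "label": "yes"},
    {"size": "large", "color": "blue", "label": "yes"},
]
best = max(["size", "color"], key=lambda a: information_gain(data, a))
```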


Statistical Reliability of a Blowfly Movement-Sensitive Neuron

Neural Information Processing Systems

We develop a model-independent method for characterizing the reliability of neural responses to brief stimuli. This approach allows us to measure the discriminability of similar stimuli, based on the real-time response of a single neuron. Neurophysiological data were obtained from a movement-sensitive neuron (H1) in the visual system of the blowfly Calliphora erythrocephala. In addition, recordings were made from blowfly photoreceptor cells to quantify the signal-to-noise ratios in the peripheral visual system. As photoreceptors form the input to the visual system, the reliability of their signals ultimately determines the reliability of any visual discrimination task. For the case of movement detection, this limit can be computed and compared to the H1 neuron's reliability. Under favorable conditions, the performance of the H1 neuron closely approaches the theoretical limit, which means that under these conditions the nervous system adds little noise in the process of computing movement from the correlations of signals in the photoreceptor array.
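The abstract does not spell out the discriminability measure, but a common model-independent index for two stimuli, given repeated single-trial responses, is the signal-detection statistic d'. A sketch under the assumption that spike counts serve as the response statistic:

```python
import numpy as np

def d_prime(responses_a, responses_b):
    """Signal-detection discriminability between two response samples.

    A generic single-neuron discriminability index, not necessarily the
    paper's exact measure: difference of means over the RMS standard
    deviation of the two response distributions.
    """
    a = np.asarray(responses_a, float)
    b = np.asarray(responses_b, float)
    pooled_sd = np.sqrt(0.5 * (a.var(ddof=1) + b.var(ddof=1)))
    return abs(a.mean() - b.mean()) / pooled_sd

# Hypothetical spike counts from repeated presentations of two stimuli.
print(d_prime([12, 14, 11, 13, 15], [17, 19, 18, 16, 20]))
```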


Propagation Filters in PDS Networks for Sequencing and Ambiguity Resolution

Neural Information Processing Systems

We present a Parallel Distributed Semantic (PDS) Network architecture that addresses the problems of sequencing and ambiguity resolution in natural language understanding. A PDS Network stores phrases and their meanings using multiple PDP networks structured in the form of a semantic net. A mechanism called Propagation Filters is employed: (1) to control communication between networks, (2) to properly sequence the components of a phrase, and (3) to resolve ambiguities. Simulation results indicate that PDS Networks and Propagation Filters can successfully represent high-level knowledge, can be trained relatively quickly, and provide for parallel inferencing at the knowledge level.

1 INTRODUCTION

Backpropagation has shown considerable potential for addressing problems in natural language processing (NLP). However, the traditional PDP [Rumelhart and McClelland, 1986] approach of using one (or a small number) of backprop networks for NLP has been plagued by a number of problems: (1) it has been largely unsuccessful at representing high-level knowledge, (2) the networks are slow to train, and (3) they are sequential at the knowledge level.
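The abstract describes Propagation Filters only functionally. As a toy illustration of the gating idea, and not the paper's implementation, a filter might pass a source network's activity onward only when it matches a gating pattern; every name and the matching rule below are assumptions:

```python
import numpy as np

def propagation_filter(source_out, gate_pattern, threshold=0.5):
    """Toy gate between two subnetworks: forward the source network's
    output only when it sufficiently matches the filter's gating pattern.

    Illustrative assumption only; the paper's filters control sequencing
    and ambiguity resolution between PDP networks in a semantic net.
    """
    match = np.dot(source_out, gate_pattern) / (
        np.linalg.norm(source_out) * np.linalg.norm(gate_pattern) + 1e-12)
    return source_out if match > threshold else np.zeros_like(source_out)

# A phrase-component vector is forwarded only if it matches the gate.
out = propagation_filter(np.array([0.9, 0.1, 0.0]), np.array([1.0, 0.0, 0.0]))
```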


Simulation of Optimal Movements Using the Minimum-Muscle-Tension-Change Model

Neural Information Processing Systems

This work discusses various optimization techniques that have been proposed in models for controlling arm movements. In particular, the minimum-muscle-tension-change model is investigated. A dynamic simulator of the monkey's arm, including seventeen single- and double-joint muscles, is used to generate horizontal hand movements. The hand trajectories produced by this algorithm are discussed.
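The criterion named in the title penalizes rapid changes in muscle tension: among trajectories reaching the target, the model selects the one minimizing $C = \tfrac{1}{2}\int_0^{t_f} \sum_i \dot{T}_i(t)^2 \, dt$. A sketch of this cost for a discretized tension trajectory (the array layout and time step are assumptions):

```python
import numpy as np

def tension_change_cost(tensions, dt):
    """Minimum-muscle-tension-change cost for a discretized trajectory.

    tensions: array of shape (T, n_muscles), e.g. n_muscles = 17 for the
    monkey-arm simulator described in the paper; dt: time step in seconds.
    Approximates C = 1/2 * integral of sum_i (dT_i/dt)^2 dt.
    """
    dT = np.diff(tensions, axis=0) / dt   # finite-difference tension rates
    return 0.5 * np.sum(dT ** 2) * dt

# Hypothetical 17-muscle trajectory sampled at 100 Hz for one second.
traj = np.random.rand(100, 17)
print(tension_change_cost(traj, dt=0.01))
```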


Fast, Robust Adaptive Control by Learning only Forward Models

Neural Information Processing Systems

A large class of motor control tasks requires that on each cycle the controller is told its current state and must choose an action to achieve a specified, state-dependent goal behaviour. This paper argues that the optimization of learning rate (the number of experimental control decisions before adequate performance is obtained) and of robustness is of prime importance, if necessary at the expense of computation per control cycle and memory requirements. This is motivated by the observation that a robot which requires two thousand learning steps to achieve adequate performance, or which occasionally gets stuck while learning, will always be undesirable, whereas moderate computational expense can be accommodated by increasingly powerful computer hardware. It is not unreasonable to assume the existence of inexpensive 100 Mflop controllers within a few years, and so even processes with control cycles in the low tens of milliseconds will have millions of machine instructions in which to make their decisions. This paper outlines a learning control scheme which aims to make effective use of such computational power.

1 MEMORY-BASED LEARNING

Memory-based learning is an approach, applicable to both classification and function learning, in which all experiences presented to the learning box are explicitly remembered. The memory, Mem, is a set of input-output pairs, $\mathrm{Mem} = \{(\mathbf{x}_1, \mathbf{y}_1), (\mathbf{x}_2, \mathbf{y}_2), \ldots, (\mathbf{x}_k, \mathbf{y}_k)\}$.
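A minimal sketch of the scheme as defined above: every experience is stored verbatim, and a query is answered from its nearest stored neighbors. Plain k-nearest-neighbor averaging is used here for concreteness; a learned local model over the retrieved neighbors would be a natural refinement:

```python
import numpy as np

class MemoryBasedLearner:
    """Store every (x, y) experience; predict by averaging the outputs of
    the k nearest stored inputs. A minimal instance of the
    Mem = {(x_1, y_1), ..., (x_k, y_k)} scheme defined above."""

    def __init__(self, k=3):
        self.k = k
        self.X, self.Y = [], []

    def remember(self, x, y):
        """Append one input-output experience to the memory."""
        self.X.append(np.asarray(x, float))
        self.Y.append(np.asarray(y, float))

    def predict(self, x):
        """Average the outputs of the k nearest remembered inputs."""
        d = [np.linalg.norm(np.asarray(x, float) - xi) for xi in self.X]
        nearest = np.argsort(d)[: self.k]
        return np.mean([self.Y[i] for i in nearest], axis=0)

mem = MemoryBasedLearner(k=2)
for x, y in [([0.0], [0.0]), ([1.0], [2.0]), ([2.0], [4.0])]:
    mem.remember(x, y)
print(mem.predict([1.5]))   # averages the two nearest experiences
```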


Markov Random Fields Can Bridge Levels of Abstraction

Neural Information Processing Systems

Network vision systems must make inferences from evidential information across levels of representational abstraction, from low-level invariants, through intermediate scene segments, to high-level, behaviorally relevant object descriptions. This paper shows that such networks can be realized as Markov Random Fields (MRFs). We show first how to construct an MRF functionally equivalent to a Hough transform parameter network, thus establishing a principled probabilistic basis for visual networks. Second, we show that these MRF parameter networks are more capable and flexible than traditional methods. In particular, they have a well-defined probabilistic interpretation, intrinsically incorporate feedback, and offer richer representations and decision capabilities.
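For reference, the Hough transform parameter network that the MRF construction emulates accumulates votes from image evidence into a discretized parameter space. A minimal line-detection sketch in $(\rho, \theta)$ form (the grid resolutions are arbitrary choices):

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=100, rho_max=100.0):
    """Minimal Hough transform for lines: each edge point votes for every
    (rho, theta) parameter cell consistent with it. The MRF construction
    in the paper gives this kind of vote accumulation a probabilistic
    reading."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), int)     # the parameter network
    for x, y in points:
        for j, th in enumerate(thetas):
            rho = x * np.cos(th) + y * np.sin(th)
            i = int((rho + rho_max) / (2 * rho_max) * (n_rho - 1))
            if 0 <= i < n_rho:
                acc[i, j] += 1
    return acc, thetas

# Collinear points concentrate their votes in one accumulator cell.
acc, _ = hough_lines([(x, 2.0 * x + 1.0) for x in range(10)])
```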


Unsupervised learning of distributions on binary vectors using two layer networks

Neural Information Processing Systems

We study a particular type of Boltzmann machine with a bipartite graph structure, known as a harmonium. Our interest is in using such a machine to model a probability distribution on binary input vectors. We analyze the class of probability distributions that can be modeled by such machines.
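A harmonium is what is now called a restricted Boltzmann machine: because visible and hidden binary units interact only across the bipartition, the hidden layer can be summed out in closed form. A sketch of the resulting unnormalized probability of a visible vector (the weight and bias names are generic):

```python
import numpy as np

def unnormalized_prob(v, W, a, b):
    """Unnormalized probability of a binary visible vector v under a
    harmonium (bipartite Boltzmann machine) with visible biases a, hidden
    biases b, and weights W of shape (n_visible, n_hidden).

    The hidden units are summed out analytically:
        p(v) proportional to exp(a.v) * prod_j (1 + exp(b_j + v.W[:, j]))
    """
    v = np.asarray(v, float)
    return np.exp(a @ v) * np.prod(1.0 + np.exp(b + v @ W))

# Tiny hypothetical model: 3 visible units, 2 hidden units.
rng = np.random.default_rng(0)
W, a, b = rng.normal(size=(3, 2)), rng.normal(size=3), rng.normal(size=2)
print(unnormalized_prob([1, 0, 1], W, a, b))
```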


Learning Unambiguous Reduced Sequence Descriptions

Neural Information Processing Systems

Do you want your neural net algorithm to learn sequences? Do not limit yourself to conventional gradient descent (or approximations thereof). Instead, use your sequence learning algorithm (any will do) to implement the following method for history compression. No matter what your final goals are, train a network to predict its next input from the previous ones. Since only unpredictable inputs convey new information, ignore all predictable inputs but let all unexpected inputs (plus information about the time step at which they occurred) become inputs to a higher-level network of the same kind (working on a slower, self-adjusting time scale). Go on building a hierarchy of such networks.
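A toy sketch of one level of this history compression scheme: only symbols the predictor fails to anticipate, paired with their time steps, are passed upward. The lookup-style predictor below is a stand-in for "any sequence learning algorithm":

```python
def compress_history(sequence, predictor):
    """Pass only unpredictable symbols, tagged with their time step, up to
    the next level, per the history-compression scheme: predictable inputs
    carry no new information and are dropped.

    `predictor(history)` returns a guess for the next symbol; any callable
    will do, standing in for a trained network.
    """
    higher_level_input = []
    for t, symbol in enumerate(sequence):
        if predictor(sequence[:t]) != symbol:          # unexpected input
            higher_level_input.append((t, symbol))     # keep symbol + time
    return higher_level_input

# Toy predictor: always guess a repeat of the previous symbol.
repeat_last = lambda hist: hist[-1] if hist else None
print(compress_history("aaabbbabb", repeat_last))
# -> [(0, 'a'), (3, 'b'), (6, 'a'), (7, 'b')]
```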