Scaling and Generalization in Neural Networks: A Case Study

Neural Information Processing Systems

Scaling and generalization have emerged as key issues in current studies of supervised learning from examples in neural networks. Questions such as how many training patterns and training cycles are needed for a problem of a given size and difficulty, how to represent the input, and how to choose useful training exemplars are of considerable theoretical and practical importance. Several intuitive rules of thumb have been obtained from empirical studies, but as yet there are few rigorous results. In this paper we summarize a study of generalization in the simplest possible case: perceptron networks learning linearly separable functions. The task chosen was the majority function (i.e. return a 1 if a majority of the input units are on), a predicate with a number of useful properties. We find that many aspects of generalization in multilayer networks learning large, difficult tasks are reproduced in this simple domain, in which concrete numerical results and even some analytic understanding can be achieved.
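The setting above is easy to make concrete. Below is a minimal, illustrative sketch (not the paper's experimental setup) of a single perceptron trained on the majority function with the classic perceptron learning rule; the input size, pattern count, and epoch count are assumptions chosen for the example.

```python
import itertools
import numpy as np

def majority(x):
    """Majority predicate: 1 if more than half of the input units are on."""
    return int(x.sum() > len(x) / 2)

def train_perceptron(n_inputs=7, n_patterns=200, epochs=50, seed=0):
    """Train a single perceptron on random examples of the majority function
    using the classic perceptron learning rule (assumed sizes, for illustration)."""
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, size=(n_patterns, n_inputs))
    y = np.array([majority(x) for x in X])
    w = np.zeros(n_inputs)
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            pred = int(w @ x + b > 0)
            err = t - pred            # +1, 0, or -1
            w += err * x              # perceptron update
            b += err
    return w, b

w, b = train_perceptron()

# Generalization: accuracy over all 2^7 input patterns, including unseen ones.
X_all = np.array(list(itertools.product([0, 1], repeat=7)))
acc = np.mean([int(w @ x + b > 0) == majority(x) for x in X_all])
```

Because majority is linearly separable (a uniform positive weight vector with a suitable threshold realizes it), the perceptron convergence theorem guarantees the training set is eventually classified perfectly; the interesting quantity is `acc` on the full input space.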


Modeling the Olfactory Bulb - Coupled Nonlinear Oscillators

Neural Information Processing Systems

A mathematical model based on the bulbar anatomy and electrophysiology is described. Simulations produce a 35-60 Hz modulated activity coherent across the bulb, mimicking the observed field potentials. The decision states (for the odor information) here can be thought of as stable cycles, rather than the point stable states typical of simpler neuro-computing models. Analysis and simulations show that a group of coupled nonlinear oscillators is responsible for the oscillatory activities determined by the odor input, and that the bulb, with appropriate inputs from higher centers, can enhance or suppress the sensitivity to particular odors. The model provides a framework in which to understand the transform between odor input and the bulbar output to olfactory cortex.
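The paper's bulb model is anatomically detailed, but the qualitative point, coupled nonlinear oscillators settling into a coherent population rhythm, can be illustrated with the generic Kuramoto phase model. This sketch is not the authors' model; the oscillator count, coupling strength, and frequency spread are invented for the example.

```python
import numpy as np

def kuramoto(n=20, K=2.0, steps=1000, dt=0.05, seed=0):
    """Coherence of n all-to-all coupled phase oscillators (Kuramoto model).
    Returns the order parameter r = |mean(exp(i*theta))| in [0, 1];
    r near 1 means the population oscillates coherently."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.5, n)       # intrinsic frequencies (rotating frame)
    theta = rng.uniform(0, 2 * np.pi, n)  # random initial phases
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))   # complex mean field
        # each oscillator is pulled toward the mean phase with strength K*|z|
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return np.abs(np.mean(np.exp(1j * theta)))

r = kuramoto()
```

With coupling well above the synchronization threshold, the initially incoherent phases lock into a single collective rhythm, the analogue of the field-potential oscillation coherent across the bulb.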


Cricket Wind Detection

Neural Information Processing Systems

A great deal of interest has recently been focused on theories concerning parallel distributed processing in central nervous systems. In particular, many researchers have become very interested in the structure and function of "computational maps" in sensory systems. As defined in a recent review (Knudsen et al., 1987), a "map" is an array of nerve cells, within which there is a systematic variation in the "tuning" of neighboring cells for a particular parameter. For example, the projection from retina to visual cortex is a relatively simple topographic map; each cortical hypercolumn itself contains a more complex "computational" map of preferred line orientation representing the angle of tilt of a simple line stimulus. The overall goal of the research in my lab is to determine how a relatively complex mapped sensory system extracts and encodes information from external stimuli.


Learning Sequential Structure in Simple Recurrent Networks

Neural Information Processing Systems

This tendency to preserve information about the path is not a characteristic of traditional finite-state automata. In a different set of experiments, we asked whether the SRN could learn to use the information about the path that is encoded in the hidden units' patterns of activation. In one of these experiments, we tested whether the network could master length constraints. When strings generated from the small finite-state grammar are limited to a maximum of eight letters, the prediction following the presentation of the same letter in position six or seven may differ. For example, following the sequence 'TSSSXXV', 'V' is the seventh letter and only another 'V' would be a legal successor.
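The mechanism at issue can be sketched with a minimal Elman-style SRN: the previous hidden state is fed back as context input, so the hidden pattern after a given letter also reflects the path leading to it. The weights below are random and untrained, and the letter codes and layer sizes are assumptions for the example, not the paper's setup.

```python
import numpy as np

class SRN:
    """Minimal Elman-style simple recurrent network (forward pass only).
    The hidden state is copied back as context, so successive hidden
    patterns can encode information about the sequence so far."""
    def __init__(self, n_in, n_hid, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.5, (n_hid, n_in))
        self.W_ctx = rng.normal(0, 0.5, (n_hid, n_hid))
        self.W_out = rng.normal(0, 0.5, (n_out, n_hid))
        self.h = np.zeros(n_hid)

    def step(self, x):
        # new hidden state depends on the input AND the previous hidden state
        self.h = np.tanh(self.W_in @ x + self.W_ctx @ self.h)
        return self.W_out @ self.h

letters = np.eye(3)                  # one-hot codes for three letters
net = SRN(3, 8, 3)

net.step(letters[0]); net.step(letters[2])
h_short = net.h.copy()               # hidden state after letter C in position 2

net.h[:] = 0.0                       # reset context for a second string
for _ in range(5):
    net.step(letters[1])
net.step(letters[2])
h_long = net.h.copy()                # hidden state after the same letter C in position 6
```

Even untrained, the two hidden patterns differ, which is exactly the raw material a trained SRN can exploit to make position-dependent predictions such as the length constraint above.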


Spreading Activation over Distributed Microfeatures

Neural Information Processing Systems

One attempt at explaining human inferencing is that of spreading activation, particularly in the structured connectionist paradigm. This has resulted in the building of systems with semantically nameable nodes which perform inferencing by examining the patterns of activation spread.
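A toy version of this idea: activation injected at a semantically nameable node spreads, attenuated, along weighted links, and the resulting activation pattern is what gets examined. The node names and link weights below are invented for the example and are not from the paper.

```python
import numpy as np

# Tiny hand-built semantic network (invented nodes and weights).
nodes = ["dog", "bark", "animal", "cat", "pet"]
idx = {n: i for i, n in enumerate(nodes)}
W = np.zeros((len(nodes), len(nodes)))
links = [("dog", "bark", 0.8), ("dog", "animal", 0.6), ("dog", "pet", 0.7),
         ("cat", "animal", 0.6), ("cat", "pet", 0.7)]
for a, b, w in links:
    W[idx[a], idx[b]] = W[idx[b], idx[a]] = w   # symmetric links

def spread(source, steps=3, decay=0.5):
    """Inject activation at `source` and spread it for a few steps:
    each node passes a decayed fraction of its activation to neighbors."""
    a = np.zeros(len(nodes))
    a[idx[source]] = 1.0
    for _ in range(steps):
        a = np.minimum(a + decay * (W @ a), 1.0)  # accumulate, clipped at 1
    return dict(zip(nodes, a))

act = spread("dog")
```

After a few steps, directly linked concepts ("bark", "pet") are more active than concepts reachable only through intermediaries ("cat"), and it is this graded pattern that a structured connectionist system inspects to draw inferences.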


Neural Architecture

Neural Information Processing Systems

Valentino Braitenberg, Max Planck Institute, Federal Republic of Germany

While we are waiting for the ultimate biophysics of cell membranes and synapses to be completed, we may speculate on the shapes of neurons and on the patterns of their connections. Much of this will be significant whatever the outcome of future physiology. Take as an example the isotropy, anisotropy and periodicity of different kinds of neural networks. The very existence of these different types in different parts of the brain (or in different brains) defeats explanation in terms of embryology: the mechanisms of development are able to make one kind of network or another. The reasons for the difference must lie in the functions they perform.


A Computationally Robust Anatomical Model for Retinal Directional Selectivity

Neural Information Processing Systems

We analyze a mathematical model for retinal directionally selective cells based on recent electrophysiological data, and show that its computation of motion direction is robust to noise and to variations in stimulus speed.


Further Explorations in Visually-Guided Reaching: Making MURPHY Smarter

Neural Information Processing Systems

Visual guidance of a multi-link arm through a cluttered workspace is known to be an extremely difficult computational problem. Classical approaches in the field of robotics have typically broken the problem into pieces of manageable size, including modules for direct and inverse kinematics and dynamics [7], along with a variety of highly complex algorithms for motion planning in the configuration space of a multi-link arm (e.g.
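For the kinematics modules mentioned above, the planar two-link arm is the standard textbook case: direct kinematics maps joint angles to the hand position, and the inverse problem has a closed-form (elbow-down) solution. This sketch is an illustration of those classical modules, not MURPHY's controller; the link lengths are arbitrary example values.

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=0.8):
    """Direct kinematics of a planar two-link arm:
    joint angles (radians) -> hand position (x, y)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def inverse_kinematics(x, y, l1=1.0, l2=0.8):
    """Closed-form inverse kinematics (elbow-down branch) for a
    reachable target (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))        # clamp for safety
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Round trip: solve for the angles reaching (0.9, 0.6), then check them.
t1, t2 = inverse_kinematics(0.9, 0.6)
x, y = forward_kinematics(t1, t2)
```

The hard part referred to in the abstract is not this closed form but what it omits: redundancy, obstacles, and dynamics, which is why classical robotics layers separate motion-planning algorithms on top of these kinematic modules.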