Computing with Arrays of Bell-Shaped and Sigmoid Functions

Neural Information Processing Systems

Bell-shaped response curves are commonly found in biological neurons whenever a natural metric exists on the corresponding relevant stimulus variable (orientation, position in space, frequency, time delay, ...). As a result, they are often used in neural models in different contexts, ranging from resolution enhancement and interpolation to learning (see, for instance, Baldi et al. (1988), Moody et al. (1989)).
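
As an illustrative sketch (not code from the paper), the following Python/NumPy snippet evaluates an array of bell-shaped (Gaussian) units and an array of sigmoid units tiling a one-dimensional stimulus variable; the grid, centers, width, and gain are arbitrary choices for illustration.

    import numpy as np

    def bell(x, center, width):
        """Bell-shaped (Gaussian) response centered on a stimulus value."""
        return np.exp(-((x - center) / width) ** 2)

    def sigmoid(x, threshold, gain):
        """Sigmoid response rising around a stimulus threshold."""
        return 1.0 / (1.0 + np.exp(-gain * (x - threshold)))

    # Stimulus variable (e.g., orientation in degrees) sampled on a grid.
    x = np.linspace(0.0, 180.0, 181)

    # An array of bell-shaped units tiling the stimulus axis.
    centers = np.linspace(0.0, 180.0, 10)
    bells = np.stack([bell(x, c, width=15.0) for c in centers])

    # A matching array of sigmoid units with staggered thresholds.
    sigmoids = np.stack([sigmoid(x, t, gain=0.5) for t in centers])

    print(bells.shape, sigmoids.shape)  # (10, 181) (10, 181)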


Adaptive Range Coding

Neural Information Processing Systems

Determination of nearly optimal, or at least adequate, regions is left as an additional task that would require that the system dynamics be analyzed, which is not always possible. To address this problem, we move region boundaries adaptively, progressively altering the initial partitioning to a more appropriate representation with no need for a priori knowledge. Unlike previous work (Michie, 1968), (Barto, 1983), (Anderson, 1982), which used fixed coders, this approach produces adaptive coders that contract and expand regions/ranges. During adaptation, frequently active regions/ranges contract, reducing the number of situations in which they will be activated, and increasing the chances that neighboring regions will receive input instead. This class of self-organization is discussed in Kohonen (Kohonen, 1984), (Ritter, 1986, 1988).
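
A minimal sketch of the contraction idea, assuming a one-dimensional state and a simple proportional shrink rule (the function name, shrink rate, and update rule are illustrative, not taken from the paper):

    import numpy as np

    def adapt_boundaries(boundaries, x, shrink=0.05):
        """One adaptation step for a 1-D range coder (illustrative sketch).

        `boundaries` are sorted region edges; the region containing the
        input `x` contracts toward its own center, so neighboring regions
        expand to cover the vacated range.
        """
        i = np.searchsorted(boundaries, x)          # active region index
        if 0 < i < len(boundaries):
            lo, hi = boundaries[i - 1], boundaries[i]
            center = 0.5 * (lo + hi)
            # Pull both edges of the active region toward its center.
            boundaries[i - 1] += shrink * (center - lo)
            boundaries[i] -= shrink * (hi - center)
        return boundaries

    boundaries = np.linspace(0.0, 1.0, 6)           # 5 equal regions initially
    for x in np.random.default_rng(0).uniform(0.4, 0.6, 200):
        boundaries = adapt_boundaries(boundaries, x)
    print(boundaries)   # regions around the frequent inputs have contracted

Frequently visited regions shrink, so later inputs from the same neighborhood increasingly fall into the expanded neighboring regions, which is the self-organizing behavior the abstract describes.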


Remarks on Interpolation and Recognition Using Neural Nets

Neural Information Processing Systems

We consider different types of single-hidden-layer feedforward nets: with or without direct input-to-output connections, and using either threshold or sigmoidal activation functions. The main results show that direct connections in threshold nets double the recognition but not the interpolation power, while using sigmoids rather than thresholds allows (at least) doubling both. Various results are also given on VC dimension and other measures of recognition capabilities.
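
To make the architectures concrete, here is an illustrative sketch (not from the paper) of a single-hidden-layer net with an optional direct input-to-output connection matrix D and a choice of threshold or sigmoid activation; all names and shapes are arbitrary.

    import numpy as np

    def threshold(x):
        return (x > 0).astype(float)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(x, W1, b1, W2, b2, D=None, act=threshold):
        """Single-hidden-layer net; D holds the optional direct
        input-to-output connections (pass None to omit them)."""
        h = act(x @ W1 + b1)
        y = h @ W2 + b2
        if D is not None:
            y = y + x @ D
        return y

    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 3))                    # 4 inputs of dimension 3
    W1, b1 = rng.normal(size=(3, 5)), rng.normal(size=5)
    W2, b2 = rng.normal(size=(5, 1)), rng.normal(size=1)
    D = rng.normal(size=(3, 1))                    # direct connections
    print(forward(x, W1, b1, W2, b2, D, act=sigmoid))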


Spherical Units as Dynamic Consequential Regions: Implications for Attention, Competition and Categorization

Neural Information Processing Systems

Spherical units can be used to construct dynamic reconfigurable consequential regions, the geometric bases for Shepard's (1987) theory of stimulus generalization in animals and humans. We derive from Shepard's (1987) generalization theory a particular multi-layer network with dynamic (centers and radii) spherical regions which possesses a specific mass function (Cauchy). This learning model generalizes the configural-cue network model (Gluck & Bower, 1988): (1) configural cues can be learned and do not require pre-wiring the power-set of cues, (2) consequential regions are continuous rather than discrete, and (3) competition amongst receptive fields is shown to be increased by the global extent of a particular mass function (Cauchy). We compare other common mass functions (Gaussian; used in models of Moody & Darken, 1989, and Kruschke, 1990) and standard backpropagation networks with hyperplane/logistic hidden units, showing that neither fares as well as models of human generalization and learning.
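
A minimal sketch contrasting the two mass functions over a spherical receptive field (the function names and test points are illustrative; the paper's actual parameterization may differ):

    import numpy as np

    def cauchy_unit(x, center, radius):
        """Spherical unit with a Cauchy mass function: heavy tails give
        the global extent that the abstract links to increased competition."""
        d2 = np.sum((x - center) ** 2, axis=-1)
        return 1.0 / (1.0 + d2 / radius ** 2)

    def gaussian_unit(x, center, radius):
        """Gaussian mass function for comparison: locally similar but
        with exponentially vanishing tails."""
        d2 = np.sum((x - center) ** 2, axis=-1)
        return np.exp(-d2 / radius ** 2)

    x = np.array([[0.0, 0.0], [2.0, 0.0], [10.0, 0.0]])
    c = np.zeros(2)
    print(cauchy_unit(x, c, 1.0))    # [1.0, 0.2, ~0.0099] - slow decay
    print(gaussian_unit(x, c, 1.0))  # [1.0, ~0.018, ~0.0]  - fast decay

The Cauchy unit still responds weakly far from its center, so distant units continue to interact (compete), whereas Gaussian responses vanish outside a local neighborhood.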


A Reinforcement Learning Variant for Control Scheduling

Neural Information Processing Systems

However, a large class of continuous control problems requires maintaining the system at a desired operating point, or setpoint, at a given time. We refer to this problem as the basic setpoint control problem [Guha 90], and have shown that reinforcement learning can be used, not surprisingly, quite well for such control tasks. A more general version of the same problem requires steering the system from some initial or starting state to a desired state or setpoint at specific times without knowledge of the dynamics of the system. We therefore wish to examine control scheduling tasks, where the system must be steered through a sequence of setpoints at specific times.
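
As an illustrative sketch of the problem formulation (not the paper's actual reward design; names, tolerance, and the schedule are assumptions), one way to express setpoint control and its scheduling variant as reinforcement signals:

    def setpoint_reward(state, setpoint, tolerance=0.1):
        """Illustrative reward for the basic setpoint problem: positive
        within tolerance of the setpoint, penalized by distance otherwise."""
        error = abs(state - setpoint)
        return 1.0 if error <= tolerance else -error

    def schedule_reward(state, t, schedule, tolerance=0.1):
        """Control scheduling variant: the target is a sequence of
        (time, setpoint) pairs the system must be steered through."""
        for t_target, sp in schedule:
            if t == t_target:
                return setpoint_reward(state, sp, tolerance)
        return 0.0  # no reward between scheduled times

    schedule = [(10, 0.5), (20, 1.5), (30, 0.0)]
    print(schedule_reward(0.52, 10, schedule))   # 1.0  (within tolerance)
    print(schedule_reward(1.0, 20, schedule))    # -0.5 (off target)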


Generalization Dynamics in LMS Trained Linear Networks

Neural Information Processing Systems

Recent progress in network design demonstrates that nonlinear feedforward neural networks can perform impressive pattern classification for a variety of real-world applications (e.g., Le Cun et al., 1990; Waibel et al., 1989). Various simulations and relationships between the neural network and machine learning theoretical literatures also suggest that too large a number of free parameters ("weight overfitting") could substantially reduce generalization performance.
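
A minimal sketch of the effect in an LMS-trained linear network, assuming a low-dimensional teacher observed through many irrelevant inputs (all sizes, rates, and noise levels are arbitrary illustration, not the paper's experiments):

    import numpy as np

    rng = np.random.default_rng(0)

    # Teacher: a low-dimensional linear map observed through noise.
    n_train, n_test, d_true, d_extra = 20, 200, 5, 45
    w_true = rng.normal(size=d_true)

    def make_data(n):
        x = rng.normal(size=(n, d_true + d_extra))  # extra irrelevant inputs
        y = x[:, :d_true] @ w_true + 0.1 * rng.normal(size=n)
        return x, y

    x_tr, y_tr = make_data(n_train)
    x_te, y_te = make_data(n_test)

    # LMS: stochastic gradient descent on the squared error.
    w = np.zeros(d_true + d_extra)
    for _ in range(200):
        for xi, yi in zip(x_tr, y_tr):
            w += 0.01 * (yi - xi @ w) * xi

    print("train MSE:", np.mean((x_tr @ w - y_tr) ** 2))
    print("test  MSE:", np.mean((x_te @ w - y_te) ** 2))  # larger: overfitting

With 50 free parameters and only 20 training examples, the network fits the training set almost perfectly while test error stays substantially higher, the overfitting regime the abstract refers to.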


Learning Time-varying Concepts

Neural Information Processing Systems

This work extends computational learning theory to situations in which concepts vary over time, e.g., system identification of a time-varying plant. We have extended formal definitions of concepts and learning to provide a framework in which an algorithm can track a concept as it evolves over time. Given this framework and focusing on memory-based algorithms, we have derived some PAC-style sample complexity results that determine, for example, when tracking is feasible. We have also used a similar framework and focused on incremental tracking algorithms for which we have derived some bounds on the mistake or error rates for some specific concept classes.

1 INTRODUCTION

The goal of our ongoing research is to extend computational learning theory to include concepts that can change or evolve over time. For example, face recognition is complicated by the fact that a person's face changes slowly with age and more quickly with changes in makeup, hairstyle, or facial hair.
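
A minimal sketch of a memory-based tracker under these assumptions: examples older than a fixed window are discarded so a drifting concept stays learnable (the class name, window size, drift model, and 1-nearest-neighbor rule are illustrative, not the paper's algorithm):

    from collections import deque

    class WindowedNN:
        """Memory-based tracker (illustrative sketch): keep only the most
        recent examples so a slowly drifting concept stays learnable."""

        def __init__(self, window=50):
            self.memory = deque(maxlen=window)  # old examples expire

        def update(self, x, label):
            self.memory.append((x, label))

        def predict(self, x):
            # 1-nearest-neighbor over the remembered examples.
            nearest = min(self.memory, key=lambda ex: abs(ex[0] - x))
            return nearest[1]

    # A drifting threshold concept: label = 1 iff x > theta(t).
    tracker = WindowedNN(window=30)
    theta = -0.5
    for t in range(300):
        theta += 0.002                    # concept drifts slowly over time
        x = (t % 100) / 50.0 - 1.0        # inputs sweep [-1, 1)
        tracker.update(x, int(x > theta))
    print(tracker.predict(0.5))           # 1: reflects the drifted concept

The window size embodies the tradeoff the sample complexity results formalize: too small a memory gives noisy estimates, while too large a memory mixes in examples labeled by an outdated concept.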




Compact EEPROM-based Weight Functions

Neural Information Processing Systems

The recent surge of interest in neural networks and parallel analog computation has motivated the need for compact analog computing blocks. Analog weighting is an important computational function of this class. Analog weighting is the combining of two analog values, one of which is typically varying (the input) and one of which is typically fixed (the weight) or at least varying more slowly. The varying value is "weighted" by the fixed value through the "weighting function", typically multiplication. Analog weighting is most interesting when the overall computational task involves computing the "weighted sum of the inputs."
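
For clarity, here is the digital analogue of what the analog block computes, as a hedged sketch (function names are illustrative; the actual circuit realizes this with stored EEPROM charge rather than code):

    def weight(input_value, stored_weight):
        """The "weighting function": combine a varying input with a
        (slowly varying) stored weight, here by multiplication."""
        return input_value * stored_weight

    def weighted_sum(inputs, weights):
        """The overall task the analog blocks compose into:
        the weighted sum of the inputs."""
        return sum(weight(x, w) for x, w in zip(inputs, weights))

    print(weighted_sum([0.5, -1.0, 2.0], [0.2, 0.4, 0.1]))  # approx. -0.1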