Spherical Units as Dynamic Consequential Regions: Implications for Attention, Competition and Categorization
Hanson, Stephen Jose, Gluck, Mark A.
Spherical units can be used to construct dynamic reconfigurable consequential regions, the geometric bases for Shepard's (1987) theory of stimulus generalization in animals and humans. We derive from Shepard's (1987) generalization theory a particular multi-layer network with dynamic (centers and radii) spherical regions which possesses a specific mass function (Cauchy). This learning model generalizes the configural-cue network model (Gluck & Bower, 1988): (1) configural cues can be learned and do not require pre-wiring the power set of cues; (2) consequential regions are continuous rather than discrete; and (3) competition among receptive fields is shown to be increased by the global extent of a particular mass function (Cauchy). We compare other common mass functions (Gaussian, used in the models of Moody & Darken, 1989, and Kruschke, 1990) and standard backpropagation networks with hyperplane/logistic hidden units, showing that neither fares as well as a model of human generalization and learning.
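The contrast between Gaussian and Cauchy mass functions comes down to tail behavior: the Cauchy's heavy polynomial tails give each spherical unit a global extent, so distant receptive fields keep competing. A minimal sketch (the unit radius r and this particular normalization are illustrative assumptions, not the paper's exact parameterization):

```python
import math

def gaussian_rbf(d, r=1.0):
    """Gaussian mass function: activation decays exponentially in squared distance."""
    return math.exp(-(d / r) ** 2)

def cauchy_rbf(d, r=1.0):
    """Cauchy mass function: heavy tails give the unit a global extent."""
    return 1.0 / (1.0 + (d / r) ** 2)

# Near the center the two functions agree closely; far from the center the
# Cauchy unit still responds appreciably while the Gaussian is effectively zero.
for d in (0.0, 1.0, 3.0, 10.0):
    print(f"d={d:5.1f}  gaussian={gaussian_rbf(d):.2e}  cauchy={cauchy_rbf(d):.2e}")
```

At distance d = 10 radii the Gaussian activation is on the order of e^-100 while the Cauchy activation is still about 1/101, which is the "global extent" driving increased competition among receptive fields.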
A Reinforcement Learning Variant for Control Scheduling
A large class of continuous control problems requires maintaining the system at a desired operating point, or setpoint, at a given time. We refer to this problem as the basic setpoint control problem [Guha 90], and have shown that reinforcement learning can be used, not surprisingly, quite well for such control tasks. A more general version of the same problem requires steering the system from some initial or starting state to a desired state or setpoint at specific times, without knowledge of the dynamics of the system. We therefore wish to examine control scheduling tasks, where the system must be steered through a sequence of setpoints at specific times.
Generalization Dynamics in LMS Trained Linear Networks
Recent progress in network design demonstrates that nonlinear feedforward neural networks can perform impressive pattern classification for a variety of real-world applications (e.g., Le Cun et al., 1990; Waibel et al., 1989). Various simulations and relationships between the neural network and machine learning theoretical literatures also suggest that too large a number of free parameters ("weight overfitting") could substantially reduce generalization performance.
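The overfitting effect the abstract describes can be sketched with a small LMS-trained linear unit. All specifics below (feature counts, noise level, learning rate, epoch count) are illustrative assumptions, not parameters from the paper: the target depends on one input, and extra free parameters let the network fit the training noise at the expense of generalization.

```python
import random

def lms_train(X, y, lr=0.01, epochs=500):
    """Train a single linear unit with the LMS (Widrow-Hoff) rule."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for x, t in zip(X, y):
            err = t - sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

def mse(w, X, y):
    """Mean squared error of the linear unit with weights w."""
    return sum((t - sum(wi * xi for wi, xi in zip(w, x))) ** 2
               for x, t in zip(X, y)) / len(X)

def make_data(n, n_features, rng):
    # The target depends only on the first feature; the rest are irrelevant inputs
    # that only add free parameters to the network.
    X = [[rng.gauss(0, 1) for _ in range(n_features)] for _ in range(n)]
    y = [2.0 * x[0] + rng.gauss(0, 0.3) for x in X]
    return X, y

rng = random.Random(0)
for n_features in (2, 40):
    Xtr, ytr = make_data(20, n_features, rng)
    Xte, yte = make_data(500, n_features, rng)
    w = lms_train(Xtr, ytr)
    print(f"{n_features:2d} free parameters: "
          f"train MSE {mse(w, Xtr, ytr):.3f}, test MSE {mse(w, Xte, yte):.3f}")
```

With 40 weights and only 20 training examples, the unit can drive training error below the noise floor while test error stays above it, the gap being the generalization cost of the surplus parameters.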
Learning Time-varying Concepts
Kuh, Anthony, Petsche, Thomas, Rivest, Ronald L.
This work extends computational learning theory to situations in which concepts vary over time, e.g., system identification of a time-varying plant. We have extended formal definitions of concepts and learning to provide a framework in which an algorithm can track a concept as it evolves over time. Given this framework and focusing on memory-based algorithms, we have derived some PAC-style sample complexity results that determine, for example, when tracking is feasible. We have also used a similar framework and focused on incremental tracking algorithms, for which we have derived some bounds on the mistake or error rates for some specific concept classes.
1 INTRODUCTION
The goal of our ongoing research is to extend computational learning theory to include concepts that can change or evolve over time. For example, face recognition is complicated by the fact that a person's face changes slowly with age and more quickly with changes in makeup, hairstyle, or facial hair.
Compact EEPROM-based Weight Functions
Kramer, A., Sin, C. K., Chu, R., Ko, P. K.
The recent surge of interest in neural networks and parallel analog computation has motivated the need for compact analog computing blocks. Analog weighting is an important computational function of this class. Analog weighting is the combining of two analog values, one of which is typically varying (the input) and one of which is typically fixed (the weight), or at least varying more slowly. The varying value is "weighted" by the fixed value through the "weighting function," which is typically multiplication. Analog weighting is most interesting when the overall computational task involves computing the "weighted sum of the inputs."
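The weighted sum the abstract refers to is a multiply-accumulate over the input vector; a minimal sketch, with illustrative input and weight values:

```python
def weighted_sum(inputs, weights):
    """Each varying input is "weighted" (multiplied) by its fixed weight,
    and the weighted values are summed."""
    return sum(w * x for w, x in zip(weights, inputs))

# Three inputs combined with three fixed weights: 0.2 + 0.2 - 0.2
print(weighted_sum([1.0, 0.5, -2.0], [0.2, 0.4, 0.1]))  # ≈ 0.2
```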
AAAI 1991 Spring Symposium Series Reports
The American Association for Artificial Intelligence held its 1991 Spring Symposium Series on March 26-28 at Stanford University, Stanford, California. This article contains short summaries of the eight symposia that were conducted: Argumentation and Belief, Composite System Design, Connectionist Natural Language Processing, Constraint-Based Reasoning, Implemented Knowledge Representation and Reasoning Systems, Integrated Intelligent Architectures, Logical Formalizations of Commonsense Reasoning, and Machine Learning of Natural Language and Ontology.