Technology
Learning Classification with Unlabeled Data
We represent objects with n-dimensional pattern vectors and consider piecewise-linear classifiers consisting of a collection of (labeled) codebook vectors in the space of the input patterns (see Figure 1). The classification boundaries are given by the Voronoi tessellation of the codebook vectors. Patterns are said to belong to the class (given by the label) of the codebook vector to which they are closest.
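As a concrete illustration of this decision rule (a minimal sketch, not the paper's implementation; the `codebook` and `labels` arrays are hypothetical), the following assigns a pattern the label of its nearest codebook vector:

```python
import numpy as np

def classify(pattern, codebook, labels):
    """Assign `pattern` the label of its nearest codebook vector.

    The decision boundaries implied by this rule form the Voronoi
    tessellation of the codebook vectors.
    """
    distances = np.linalg.norm(codebook - pattern, axis=1)
    return labels[np.argmin(distances)]

# Example: two codebook vectors in 2-D, one per class.
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
print(classify(np.array([0.2, 0.1]), codebook, labels))  # -> 0
```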
Grammatical Inference by Attentional Control of Synchronization in an Oscillating Elman Network
Baird, Bill, Troyer, Todd, Eeckman, Frank
We show how an "Elman" network architecture, constructed from recurrently connected oscillatory associative memory network modules, can employ selective "attentional" control of synchronization to direct the flow of communication and computation within the architecture to solve a grammatical inference problem. Previously we have shown how the discrete-time "Elman" network algorithm can be implemented in a network completely described by continuous ordinary differential equations. The time steps (machine cycles) of the system are implemented by rhythmic variation (clocking) of a bifurcation parameter. In this architecture, oscillation amplitude codes the information content or activity of a module (unit), whereas phase and frequency are used to "softwire" the network. Only synchronized modules communicate by exchanging amplitude information; the activity of non-resonating modules contributes incoherent crosstalk noise. Attentional control is modeled as a special subset of the hidden modules whose outputs affect the resonant frequencies of other hidden modules. These modules control synchrony among the other modules and direct the flow of computation (attention) to effect transitions between two subgraphs of a thirteen-state automaton which the system emulates to generate a Reber grammar. The internal crosstalk noise is used to drive the required random transitions of the automaton.
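The gating of communication by resonance can be caricatured as follows (an illustrative sketch only, not the authors' differential equations; the arrays `amps`, `freqs`, `W` and the tolerance/crosstalk constants are assumptions):

```python
import numpy as np

def module_inputs(amps, freqs, W, tol=0.05, crosstalk=0.01):
    """Net input to each module: full amplitude exchange between
    (nearly) resonant modules, weak incoherent noise otherwise."""
    n = len(amps)
    gate = np.where(
        np.abs(freqs[:, None] - freqs[None, :]) < tol,  # synchronized pairs
        1.0,                                            # coherent channel
        crosstalk * np.random.randn(n, n))              # incoherent crosstalk
    return (W * gate) @ amps

# "Attentional" control amounts to shifting a module's resonant
# frequency, which decides with whom it exchanges amplitude information.
```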
Two Iterative Algorithms for Computing the Singular Value Decomposition from Input/Output Samples
The Singular Value Decomposition (SVD) is an important tool for linear algebra and can be used to invert or approximate matrices. Although many authors use "SVD" synonymously with "Eigenvector Decomposition" or "Principal Components Transform", it is important to realize that these other methods apply only to symmetric matrices, while the SVD can be applied to arbitrary nonsquare matrices. This property is important for applications to signal transmission and control. I propose two new algorithms for iterative computation of the SVD given only sample inputs and outputs from a matrix. Although there currently exist many algorithms for Eigenvector Decomposition (Sanger 1989, for example), these are the first true sample-based SVD algorithms.
1 INTRODUCTION
The Singular Value Decomposition (SVD) is a method for writing an arbitrary nonsquare matrix as the product of two orthogonal matrices and a diagonal matrix.
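For intuition about the sample-based setting, here is a minimal sketch, not one of the paper's two algorithms (the abstract does not spell them out): a stochastic power iteration that recovers the top singular pair of an unknown matrix from input/output pairs, assuming white inputs so that E[yx^T] = A. The step size `eta` and step count are assumptions.

```python
import numpy as np

def top_singular_pair(sample, n_in, n_out, eta=0.05, steps=5000):
    """Estimate the leading singular vectors of a hidden matrix A
    from samples (x, y = A x), without ever seeing A itself."""
    u = np.random.randn(n_out); u /= np.linalg.norm(u)
    v = np.random.randn(n_in);  v /= np.linalg.norm(v)
    for _ in range(steps):
        x, y = sample()            # one input/output pair from A
        u += eta * y * (x @ v)     # in expectation: u <- u + eta * A v
        v += eta * x * (y @ u)     # in expectation: v <- v + eta * A^T u
        u /= np.linalg.norm(u)
        v /= np.linalg.norm(v)
    return u, v

# Usage: hidden matrix A, samples drawn as white Gaussian inputs.
A = np.random.randn(4, 3)
draw = lambda: (lambda x: (x, A @ x))(np.random.randn(3))
u, v = top_singular_pair(draw, n_in=3, n_out=4)
sigma = u @ A @ v   # approximates the largest singular value
```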
Assessing the Quality of Learned Local Models
Schaal, Stefan, Atkeson, Christopher G.
An approach is presented to learning high-dimensional functions in the case where the learning algorithm can affect the generation of new data. A local modeling algorithm, locally weighted regression, is used to represent the learned function. Architectural parameters of the approach, such as distance metrics, are also localized and become a function of the query point instead of being global. Statistical tests are given for when a local model is good enough and sampling should be moved to a new area. Our methods explicitly deal with the case where prediction accuracy requirements exist during exploration: By gradually shifting a "center of exploration" and controlling the speed of the shift with local prediction accuracy, a goal-directed exploration of state space takes place along the fringes of the current data support until the task goal is achieved.
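A minimal sketch of the local modeling step, assuming a Gaussian distance kernel (the paper localizes such distance-metric parameters per query; here the kernel width `h` is a fixed assumption):

```python
import numpy as np

def lwr_predict(query, X, y, h=1.0):
    """Locally weighted linear regression: fit a linear model around
    `query`, weighting each data point by its proximity to the query."""
    d2 = np.sum((X - query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * h ** 2))             # local weights
    Xb = np.hstack([X, np.ones((len(X), 1))])    # append bias term
    WX = Xb * w[:, None]
    beta = np.linalg.lstsq(WX.T @ Xb, WX.T @ y, rcond=None)[0]
    return np.append(query, 1.0) @ beta

# Example: a linear target is recovered exactly near the query.
X = np.random.rand(50, 2)
y = X[:, 0] + X[:, 1]
print(lwr_predict(np.array([0.5, 0.5]), X, y, h=0.3))  # ~ 1.0
```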
Learning Complex Boolean Functions: Algorithms and Applications
Oliveira, Arlindo L., Sangiovanni-Vincentelli, Alberto
The most commonly used neural network models are not well suited to direct digital implementations because each node needs to perform a large number of operations between floating point values. Fortunately, the ability to learn from examples and to generalize is not restricted to networks of this type. Indeed, networks where each node implements a simple Boolean function (Boolean networks) can be designed in such a way as to exhibit similar properties. Two algorithms that generate Boolean networks from examples are presented. The results show that these algorithms generalize very well in a class of problems that accept compact Boolean network descriptions. The techniques described are general and can be applied to tasks that are not known to have that characteristic. Two examples of applications are presented: image reconstruction and handwritten character recognition.
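To make the representation concrete (the abstract does not detail the two learning algorithms, so this sketch shows only the network model), the following evaluates a feedforward network whose nodes each compute a simple Boolean function of earlier values, with no floating point involved:

```python
from typing import Callable, List

def eval_boolean_net(inputs: List[bool],
                     nodes: List[Callable[..., bool]],
                     wiring: List[List[int]]) -> bool:
    """Evaluate nodes in topological order; each reads earlier values."""
    values = list(inputs)
    for f, srcs in zip(nodes, wiring):
        values.append(f(*(values[i] for i in srcs)))
    return values[-1]

# Example: (x0 AND x1) OR x2 as a two-node Boolean network.
net = [lambda a, b: a and b, lambda a, b: a or b]
wiring = [[0, 1], [3, 2]]  # node 0 reads inputs 0,1; node 1 reads node 0 and input 2
print(eval_boolean_net([True, False, True], net, wiring))  # -> True
```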
Development of Orientation and Ocular Dominance Columns in Infant Macaques
Obermayer, Klaus, Kiorpes, Lynne, Blasdel, Gary G.
Maps of orientation preference and ocular dominance were recorded optically from the cortices of 5 infant macaque monkeys, ranging in age from 3.5 to 14 weeks. In agreement with previous observations, we found that basic features of orientation and ocular dominance maps, as well as correlations between them, are present and robust by 3.5 weeks of age. We did observe changes in the strength of ocular dominance signals, as well as in the spacing of ocular dominance bands, both of which increased steadily between 3.5 and 14 weeks of age. The latter finding suggests that the adult spacing of ocular dominance bands depends on cortical growth in neonatal animals. Since we found no corresponding increase in the spacing of orientation preferences, however, there is a possibility that the orientation preferences of some cells change as the cortical surface expands. Since correlations between the patterns of orientation selectivity and ocular dominance are present at an age when the visual system is still immature, it seems likely that their development may be an innate process and may not require extensive visual experience.
A Network Mechanism for the Determination of Shape-From-Texture
We propose a computational model for how the cortex discriminates shape and depth from texture. The model consists of four stages: (1) extraction of local spatial frequency, (2) frequency characterization, (3) detection of texture compression by normalization, and (4) integration of the normalized frequency over space. The model accounts for a number of psychophysical observations including experiments based on novel random textures. These textures are generated from white noise and manipulated in the Fourier domain in order to produce specific frequency spectra. Simulations with a range of stimuli, including real images, show qualitative and quantitative agreement with human perception.
1 INTRODUCTION
There are several physical cues to shape and depth which arise from changes in projection as a surface curves away from view, or recedes in perspective.
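The stimulus construction can be sketched as follows, assuming an isotropic band-pass target spectrum (the specific spectra used in the paper are not given in the abstract): white noise is transformed to the Fourier domain, its amplitudes are restricted to a chosen frequency band while its random phases are kept, and the result is transformed back.

```python
import numpy as np

def random_texture(size=256, f_lo=0.05, f_hi=0.15):
    """Random texture from white noise with a prescribed frequency band."""
    noise = np.random.randn(size, size)
    F = np.fft.fft2(noise)
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    r = np.sqrt(fx ** 2 + fy ** 2)          # radial frequency of each bin
    mask = (r >= f_lo) & (r <= f_hi)        # keep only the chosen band
    return np.real(np.fft.ifft2(F * mask))  # texture with that spectrum
```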
Implementing Intelligence on Silicon Using Neuron-Like Functional MOS Transistors
Shibata, Tadashi, Kotani, Koji, Yamashita, Takeo, Ishii, Hiroshi, Kosaka, Hideo, Ohmi, Tadahiro
We present the implementation of intelligent electronic circuits realized for the first time using a new functional device called the Neuron MOS Transistor (neuMOS or vMOS for short), which simulates the behavior of biological neurons at the single-transistor level. A search for the most closely matching data in a memory cell array, for instance, can be carried out automatically in hardware without any software manipulation. What we have named "Soft Hardware" can arbitrarily change its logic function in real time via external control signals, without any hardware modification. Implementation of a neural network equipped with an on-chip self-learning capability is also described. Through the studies of vMOS intelligent circuit implementation, we noticed an interesting similarity between the architectures of vMOS logic circuitry and biological systems.
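A behavioral sketch of the vMOS idea, assuming the standard capacitive-coupling model of a floating gate (the coupling and threshold values below are hypothetical): the floating-gate potential is a capacitance-weighted sum of the input-gate voltages, and the device switches on when that sum exceeds the threshold, so a single transistor computes a neuron-like weighted-sum-and-threshold.

```python
def vmos_output(gate_voltages, couplings, threshold):
    """Floating-gate potential as a capacitance-weighted average of the
    input gates; the transistor conducts ("fires") above threshold."""
    phi_f = sum(c * v for c, v in zip(couplings, gate_voltages)) / sum(couplings)
    return phi_f > threshold

# Example: three input gates with equal coupling and a 1 V threshold.
print(vmos_output([0.8, 1.2, 1.5], [1.0, 1.0, 1.0], 1.0))  # -> True
```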