Neural Information Processing Systems
Digital Realisation of Self-Organising Maps
Allinson, Nigel M., Johnson, Martin J., Moon, Kevin J.
Background
The overall aim of our work is to develop fast and flexible systems for image recognition, usually for commercial inspection tasks. There is an urgent need for automatic learning systems in such applications, since at present most systems employ heuristic classification techniques. This approach requires an extensive development effort for each new application, which inflates implementation costs; and for many tasks, there are no clearly defined features which can be employed for classification. Enquiring of a human expert will often only produce "good" and "bad" examples of each class and not the underlying strategies which he may employ. Our approach is to model, in a quite abstract way, the perceptual networks found in the mammalian brain for vision. A back-propagation network could be employed to generalise about the input pattern space, and it would find some useful representations. However, there are many difficulties with this approach, since the network structure assumes nothing about the input space and it can be difficult to bound complicated feature clusters using hyperplanes. The mammalian brain is a layered structure, and so another model may be proposed which involves the application of many two-dimensional feature maps. Each map takes information from the output of the preceding one and performs some type of clustering analysis in order to reduce the dimensionality of the input information.
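The self-organising map underlying this kind of feature map can be sketched in a few lines. The following is a minimal illustrative implementation of Kohonen-style SOM training (the grid size, decay schedules, and Gaussian neighbourhood here are conventional choices, not details taken from the paper):

```python
import numpy as np

def train_som(data, grid_shape=(10, 10), n_iters=1000, lr0=0.5, sigma0=3.0):
    """Train a two-dimensional self-organising map on `data` (n_samples x dim)."""
    rng = np.random.default_rng(0)
    h, w = grid_shape
    weights = rng.random((h, w, data.shape[1]))
    # Grid coordinates of each node, used by the neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(n_iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the node whose weight vector is closest to x.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), grid_shape)
        # Linearly decaying learning rate and neighbourhood radius.
        frac = t / n_iters
        lr = lr0 * (1 - frac)
        sigma = sigma0 * (1 - frac) + 0.5
        # Gaussian neighbourhood centred on the BMU: nearby nodes move more.
        d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        nb = np.exp(-d2 / (2 * sigma ** 2))[..., None]
        weights += lr * nb * (x - weights)
    return weights
```

After training, nearby nodes in the grid respond to nearby regions of the input space, which is the dimensionality-reducing clustering behaviour described above.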
An Optimality Principle for Unsupervised Learning
We propose an optimality principle for training an unsupervised feedforward neural network based upon maximal ability to reconstruct the input data from the network outputs. We describe an algorithm which can be used to train either linear or nonlinear networks with certain types of nonlinearity. Examples of applications to the problems of image coding, feature detection, and analysis of randomdot stereograms are presented.
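For the linear case, the reconstruction principle reduces to minimising the squared error between the input and its reconstruction from the network outputs. A minimal sketch of this idea (gradient descent on a tied-weight linear network; the learning rate and parameterisation are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def train_linear_autoencoder(X, k=2, lr=0.01, n_iters=300):
    """Minimise ||X - X W W^T||^2: W encodes the input into k outputs,
    W^T reconstructs the input from those outputs."""
    rng = np.random.default_rng(0)
    d = X.shape[1]
    W = rng.normal(scale=0.1, size=(d, k))
    for _ in range(n_iters):
        H = X @ W          # network outputs (the codes)
        R = H @ W.T        # reconstruction of the input
        E = R - X          # reconstruction error
        # Gradient of ||E||^2 with respect to the tied weights W.
        grad = 2 * (X.T @ E @ W + E.T @ X @ W)
        W -= lr * grad / len(X)
    return W
```

For linear networks this objective is optimised by the principal subspace of the data, which is why such networks discover PCA-like representations.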
Associative Learning via Inhibitory Search
ALVIS is a reinforcement-based connectionist architecture that learns associative maps in continuous multidimensional environments. The discovered locations of positive and negative reinforcements are recorded in "do be" and "don't be" subnetworks, respectively. The outputs of the subnetworks relevant to the current goal are combined and compared with the current location to produce an error vector. This vector is backpropagated through a motor-perceptual mapping network.
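The core of the scheme is the construction of an error vector from the recorded "do be" and "don't be" locations. The following sketch is one plausible way to combine them; the proximity weighting used here is an assumption for illustration, not the combination rule of the ALVIS architecture:

```python
import numpy as np

def alvis_error(do_be, dont_be, current, tau=1.0):
    """Combine attraction toward recorded positive ('do be') locations and
    repulsion from negative ('don't be') locations into a target, and return
    the error vector between that target and the current location."""
    do_be = np.asarray(do_be, dtype=float)
    dont_be = np.asarray(dont_be, dtype=float)
    current = np.asarray(current, dtype=float)
    # Weight each recorded location by its proximity to the current position
    # (illustrative assumption).
    w_pos = np.exp(-np.linalg.norm(do_be - current, axis=1) / tau)
    w_neg = np.exp(-np.linalg.norm(dont_be - current, axis=1) / tau)
    attract = (w_pos[:, None] * (do_be - current)).sum(axis=0)
    repel = (w_neg[:, None] * (dont_be - current)).sum(axis=0)
    # The error vector that would be backpropagated through the
    # motor-perceptual mapping network.
    return attract - repel
```

The returned vector points away from discovered negative reinforcements and toward positive ones, which is the behaviour the subnetwork combination is meant to produce.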
Neural Architecture
While we are waiting for the ultimate biophysics of cell membranes and synapses to be completed, we may speculate on the shapes of neurons and on the patterns of their connections. Much of this will be significant whatever the outcome of future physiology. Take as an example the isotropy, anisotropy and periodicity of different kinds of neural networks. The very existence of these different types in different parts of the brain (or in different brains) defeats explanation in terms of embryology; the mechanisms of development are able to make one kind of network or another. The reasons for the difference must be in the functions they perform.
Neural Control of Sensory Acquisition: The Vestibulo-Ocular Reflex
Paulin, Michael G., Nelson, Mark E., Bower, James M.
We present a new hypothesis that the cerebellum plays a key role in actively controlling the acquisition of sensory information by the nervous system. In this paper we explore this idea by examining the function of a simple cerebellar-related behavior, the vestibulo-ocular reflex or VOR, in which eye movements are generated to minimize image slip on the retina during rapid head movements. Considering this system from the point of view of statistical estimation theory, our results suggest that the transfer function of the VOR, often regarded as a static or slowly modifiable feature of the system, should actually be continuously and rapidly changed during head movements. We further suggest that these changes are under the direct control of the cerebellar cortex and propose experiments to test this hypothesis.
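The statistical-estimation viewpoint can be illustrated with the simplest optimal estimator. In a scalar Kalman filter, the optimal gain is recomputed at every time step rather than held fixed, which is the estimation-theoretic analogue of a continuously adjusted transfer function (this is a generic illustration, not the paper's model of the VOR):

```python
def kalman_gains(q, r, p0, n_steps):
    """Scalar Kalman filter with process noise q and measurement noise r.
    Returns the sequence of optimal gains, which changes at every step."""
    p = p0
    gains = []
    for _ in range(n_steps):
        p = p + q            # predict: process noise inflates uncertainty
        k = p / (p + r)      # optimal gain for this step
        p = (1 - k) * p      # update: the measurement reduces uncertainty
        gains.append(k)
    return gains
```

Starting from high uncertainty, the optimal gain is large and then settles toward a steady-state value; a fixed-gain reflex would be suboptimal everywhere along that trajectory.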
Neural Approach for TV Image Compression Using a Hopfield Type Network
Naillon, Martine, Theeten, Jean-Bernard
ABSTRACT A self-organizing Hopfield network has been developed in the context of Vector Quantization, aiming at compression of television images. The metastable states of the spin-glass-like network are used as an extra storage resource, using the Minimal Overlap learning rule (Krauth and Mezard, 1987) to optimize the organization of the attractors. The self-organizing scheme that we have devised results in the generation of an adaptive codebook for any given TV image. Since in many applications the attractors are unknown, the aim of this work is to develop a network capable of learning how to select its attractors. TV image compression using Vector Quantization (V.Q.) (Gray, 1984), a key issue for HDTV transmission, is a typical case, since the non-neural algorithms which generate the list of codes (the codebook) are suboptimal.
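For context, vector quantization compresses an image by replacing each block of pixels with the index of its nearest codeword. A minimal sketch of encoding and of a conventional non-neural (k-means style) codebook generator of the kind the paper contrasts with (the details here are a standard baseline, not the paper's network):

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Map each image block to the index of its nearest codeword."""
    d = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=-1)
    return np.argmin(d, axis=1)

def build_codebook(blocks, k=4, n_iters=20):
    """k-means style codebook generation: a standard non-neural baseline."""
    rng = np.random.default_rng(0)
    codebook = blocks[rng.choice(len(blocks), k, replace=False)]
    for _ in range(n_iters):
        idx = vq_encode(blocks, codebook)
        for j in range(k):
            members = blocks[idx == j]
            if len(members):
                # Move each codeword to the centroid of its assigned blocks.
                codebook[j] = members.mean(axis=0)
    return codebook
```

Compression comes from transmitting only the indices; the quality of the result depends entirely on how well the codebook's codewords (the network's attractors, in the paper's scheme) cover the image's block statistics.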
Using Backpropagation with Temporal Windows to Learn the Dynamics of the CMU Direct-Drive Arm II
Goldberg, Kenneth Y., Pearlmutter, Barak A.
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213 ABSTRACT Computing the inverse dynamics of a robot arm is an active area of research in the control literature. We hope to learn the inverse dynamics by training a neural network on the measured response of a physical arm. The input to the network is a temporal window of measured positions; the output is a vector of torques. We train the network on data measured from the first two joints of the CMU Direct-Drive Arm II as it moves through a randomly generated sample of "pick-and-place" trajectories. We then test generalization with a new trajectory and compare the network's output with the torque measured at the physical arm.
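The temporal-window construction above is straightforward to sketch: each training input concatenates several consecutive position vectors, and the target is the torque at the window's final time step (the window length and the choice of the final step as the target are illustrative assumptions):

```python
import numpy as np

def make_windows(positions, torques, window=3):
    """Build (input, target) pairs for inverse-dynamics learning: each input
    is `window` consecutive joint-position vectors concatenated into one
    vector; the target is the torque at the window's final time step."""
    X, Y = [], []
    for t in range(window - 1, len(positions)):
        X.append(np.concatenate(positions[t - window + 1 : t + 1]))
        Y.append(torques[t])
    return np.array(X), np.array(Y)
```

A window of positions implicitly carries velocity and acceleration information (as finite differences), which is what the inverse dynamics needs without requiring explicit differentiation of noisy measurements.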