EMPATH: Face, Emotion, and Gender Recognition Using Holons

Neural Information Processing Systems

The network is trained simply to reproduce its input, and so can be seen as a nonlinear version of Kohonen's (1977) auto-associator. However, the input must pass through a narrow channel of hidden units, so the network must extract regularities from the images during learning. Empirical analysis of the trained network showed that the hidden units span the principal subspace of the image vectors, with some noise on the first principal component due to network nonlinearity (Cottrell & Munro, 1988).
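As a minimal sketch of the claim that an auto-associator with a narrow hidden layer recovers the principal subspace, the following toy example trains a tied-weight linear auto-associator by gradient descent on synthetic low-rank data (an assumption: the paper's network is nonlinear and trained on face images; the data, dimensions, and learning rate here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" data: 200 samples in 8 dimensions whose variance is
# concentrated in 3 directions, standing in for face images.
scales = np.array([5.0, 3.0, 2.0, 0.3, 0.2, 0.1, 0.1, 0.1])
Q0, _ = np.linalg.qr(rng.standard_normal((8, 8)))   # random orthogonal mixing
X = (rng.standard_normal((200, 8)) * scales) @ Q0.T
X -= X.mean(axis=0)

k = 3                                   # narrow channel of k hidden units
W = rng.standard_normal((8, k)) * 0.1   # tied encoder/decoder weights

# Linear auto-associator trained to reproduce its input:
# minimize ||X - X W W^T||^2 by gradient descent.
lr = 0.005
for _ in range(5000):
    E = X - X @ W @ W.T                            # reconstruction error
    W += lr * (X.T @ E @ W + E.T @ X @ W) / len(X)

# Compare the learned subspace with the top-k principal subspace.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
P_pca = Vt[:k].T @ Vt[:k]        # projector onto principal subspace
Qw, _ = np.linalg.qr(W)
P_net = Qw @ Qw.T                # projector onto the network's subspace
print(np.linalg.norm(P_pca - P_net))   # near 0: the subspaces coincide
```

The hidden units do not individually equal the principal components; only the subspace they span matches, which is why the comparison is between projectors rather than between weight vectors.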



Real-time autonomous robot navigation using VLSI neural networks

Neural Information Processing Systems

There have been very few demonstrations of the application of VLSI neural networks to real world problems. Yet there are many signal processing, pattern recognition or optimization problems where a large number of competing hypotheses need to be explored in parallel, most often in real time. The massive parallelism of VLSI neural network devices, with one multiplier circuit per synapse, is ideally suited to such problems. In this paper, we present preliminary results from our design for a real-time robot navigation system based on VLSI neural network modules. This is a real world problem which has not been fully solved by traditional AI methods; even when partial solutions have been proposed and implemented, these have required vast computational resources, usually remote from the robot and linked to it via an umbilical cord. The aim of our work is to develop an autonomous vehicle capable of real-time navigation, including obstacle avoidance, in a known indoor environment.


Integrated Segmentation and Recognition of Hand-Printed Numerals

Neural Information Processing Systems

Neural network algorithms have proven useful for recognition of individual, segmented characters. However, their recognition accuracy has been limited by the accuracy of the underlying segmentation algorithm. Conventional, rule-based segmentation algorithms encounter difficulty if the characters are touching, broken, or noisy. The problem in these situations is that often one cannot properly segment a character until it is recognized, yet one cannot properly recognize a character until it is segmented. We present here a neural network algorithm that simultaneously segments and recognizes in an integrated system. This algorithm has several novel features: it uses a supervised learning algorithm (backpropagation), but is able to take position-independent information as targets and self-organize the activities of the units in a competitive fashion to infer the positional information. We demonstrate this ability with overlapping hand-printed numerals.


Convergence of a Neural Network Classifier

Neural Information Processing Systems

In this paper, we prove that the vectors in the LVQ learning algorithm converge. We do this by showing that the learning algorithm performs stochastic approximation. Convergence is then obtained by identifying the appropriate conditions on the learning rate and on the underlying statistics of the classification problem. We also present a modification to the learning algorithm which we argue results in convergence of the LVQ error to the Bayesian optimal error as the appropriate parameters become large.
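To make the object of the convergence proof concrete, here is a small sketch of the basic LVQ1 update with a decreasing learning rate of the stochastic-approximation kind discussed above (the data, rate schedule `a/(t+1)`, and prototype initialization are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

def lvq1(X, y, M, c, steps=5000, a=0.5):
    """One-pass LVQ1. M: (k, d) codebook vectors, c: (k,) their labels.
    The rate a/(t+1) satisfies the stochastic-approximation conditions
    sum(lr_t) = inf and sum(lr_t**2) < inf required for convergence."""
    M = M.copy()
    for t in range(steps):
        i = rng.integers(len(X))
        x, label = X[i], y[i]
        j = np.argmin(np.linalg.norm(M - x, axis=1))  # nearest codebook vector
        lr = a / (t + 1)
        if c[j] == label:
            M[j] += lr * (x - M[j])   # matching class: move toward the sample
        else:
            M[j] -= lr * (x - M[j])   # wrong class: move away
    return M

# Two well-separated Gaussian classes in 2-D, one prototype per class.
X0 = rng.normal([-2, 0], 0.5, size=(200, 2))
X1 = rng.normal([+2, 0], 0.5, size=(200, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

M = lvq1(X, y, np.array([[-1.0, 0.0], [1.0, 0.0]]), c=np.array([0, 1]))
pred = np.array([0 if np.linalg.norm(x - M[0]) < np.linalg.norm(x - M[1]) else 1
                 for x in X])
print((pred == y).mean())   # classification accuracy close to 1.0
```

The shrinking step sizes freeze the codebook vectors near class-typical points; the paper's contribution is proving that this freezing is genuine convergence under the stated conditions.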


Statistical Mechanics of Temporal Association in Neural Networks

Neural Information Processing Systems

Basic computational functions of associative neural structures may be analytically studied within the framework of attractor neural networks where static patterns are stored as stable fixed-points for the system's dynamics. If the interactions between single neurons are instantaneous and mediated by symmetric couplings, there is a Lyapunov function for the retrieval dynamics (Hopfield 1982). The global computation corresponds in that case to a downhill motion in an energy landscape created by the stored information. Methods of equilibrium statistical mechanics may be applied and permit a quantitative analysis of the asymptotic network behavior (Amit et al. 1985, 1987). The existence of a Lyapunov function is thus of great conceptual as well as technical importance. Nevertheless, one should be aware that environmental inputs to a neural net always provide information in both space and time. It is therefore desirable to extend the original Hopfield scheme and to explore possibilities for a joint representation of static patterns and temporal associations.
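The "downhill motion" guaranteed by the Lyapunov function can be checked numerically. This sketch stores a few random patterns with symmetric Hebbian couplings and verifies that the Hopfield energy never increases under asynchronous updates (network size, pattern count, and corruption level are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)

N, P = 64, 3
patterns = rng.choice([-1, 1], size=(P, N))

# Symmetric Hebbian couplings with zero self-coupling.
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0.0)

def energy(s):
    """Hopfield energy E = -1/2 s^T J s, the Lyapunov function."""
    return -0.5 * s @ J @ s

# Start from pattern 0 with 10 of the 64 neurons flipped.
s = patterns[0].copy()
s[rng.choice(N, size=10, replace=False)] *= -1

E_trace = [energy(s)]
for _ in range(5):                              # a few asynchronous sweeps
    for i in rng.permutation(N):
        s[i] = 1 if J[i] @ s >= 0 else -1       # single-neuron update
        E_trace.append(energy(s))

# Downhill motion: the largest energy change is non-positive,
# and the dynamics has retrieved the stored pattern.
print(max(b - a for a, b in zip(E_trace, E_trace[1:])))
print(np.mean(s == patterns[0]))   # overlap with the stored pattern, near 1
```

With asymmetric or delayed couplings, which the temporal-association schemes below require, exactly this monotonicity argument breaks down; that is why losing the Lyapunov function is the central technical issue.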


Self-organization of Hebbian Synapses in Hippocampal Neurons

Neural Information Processing Systems

We are exploring the significance of biological complexity for neuronal computation. Here we demonstrate that Hebbian synapses in realistically-modeled hippocampal pyramidal cells may give rise to two novel forms of self-organization in response to structured synaptic input. First, on the basis of the electrotonic relationships between synaptic contacts, a cell may become tuned to a small subset of its input space. Second, the same mechanisms may produce clusters of potentiated synapses across the space of the dendrites. The latter type of self-organization may be functionally significant in the presence of nonlinear dendritic conductances.


Planning with an Adaptive World Model

Neural Information Processing Systems

We present a new connectionist planning method [TML90]. By interaction with an unknown environment, a world model is progressively constructed using gradient descent. For deriving optimal actions with respect to future reinforcement, planning is applied in two steps: an experience network proposes a plan which is subsequently optimized by gradient descent with a chain of world models, so that an optimal reinforcement may be obtained when it is actually run. The appropriateness of this method is demonstrated by a robotics application and a pole balancing task.
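The two-step scheme can be illustrated in miniature: fit a world model from interaction data, then improve an initial plan by gradient ascent on predicted reinforcement through a chain of model copies. This sketch substitutes a 1-D linear environment, a least-squares world model, and a zero initial plan for the paper's networks; only the chaining-and-backpropagation idea is taken from the text:

```python
import numpy as np

rng = np.random.default_rng(3)

# Unknown environment: a 1-D state with true dynamics x' = x + 0.8 * a.
def env_step(x, a):
    return x + 0.8 * a

# Step 1: build a world model from random interaction (linear regression
# here, standing in for a gradient-descent-trained model network).
xs, acts = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500)
(wx, wa), *_ = np.linalg.lstsq(np.column_stack([xs, acts]),
                               env_step(xs, acts), rcond=None)

# Step 2: optimize a T-step action sequence through the chain of world
# models, maximizing the final reinforcement r = -(x_T - goal)^2.
goal, T = 2.0, 4
a_seq = np.zeros(T)            # initial plan (the experience network's role)
for _ in range(200):
    x = 0.0
    for a in a_seq:            # forward pass through the chained model
        x = wx * x + wa * a
    # backward pass: dr/da_t = -2 (x_T - goal) * wa * wx^(T-1-t)
    g = -2.0 * (x - goal)
    grads = np.array([g * wa * wx ** (T - 1 - t) for t in range(T)])
    a_seq += 0.1 * grads       # gradient ascent on predicted reinforcement

# Execute the optimized plan in the real environment.
x = 0.0
for a in a_seq:
    x = env_step(x, a)
print(x)   # close to the goal 2.0
```

Because the plan is optimized against the model and then run in the environment, the achieved reinforcement is only as good as the model; this is why the world model is constructed progressively from real interaction.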



An Analog VLSI Splining Network

Neural Information Processing Systems

We have produced a VLSI circuit capable of learning to approximate arbitrary smooth functions of a single variable using a technique closely related to splines. The circuit effectively has 512 knots spaced on a uniform grid and has full support for learning. The circuit can also be used to approximate multi-variable functions as a sum of splines. An interesting, and as yet nearly untapped, set of applications for VLSI implementation of neural network learning systems can be found in adaptive control and nonlinear signal processing. In most such applications, the learning task consists of approximating a real function of a small number of continuous variables from discrete data points.
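A software caricature of the knot-based scheme: store learnable values at 512 uniformly spaced knots, interpolate between the two bracketing knots, and adapt only those two knots per sample, LMS-style. The piecewise-linear interpolant, the target function, and the learning rate are assumptions for illustration; the actual circuit's spline basis and analog learning rule may differ:

```python
import numpy as np

rng = np.random.default_rng(4)

K = 512                 # knots on a uniform grid over [0, 1]
knots = np.zeros(K)     # learnable knot values

def predict(x, knots):
    """Piecewise-linear interpolation between adjacent knots."""
    t = np.clip(x, 0.0, 1.0) * (K - 1)
    i = np.minimum(t.astype(int), K - 2)
    f = t - i
    return (1 - f) * knots[i] + f * knots[i + 1]

# LMS-style learning: each sample updates only the two bracketing knots,
# in proportion to their interpolation weights.
target = lambda x: np.sin(2 * np.pi * x)   # smooth function to approximate
lr = 0.5
for _ in range(20000):
    x = rng.uniform()
    t = x * (K - 1)
    i, f = min(int(t), K - 2), x * (K - 1) - min(int(t), K - 2)
    err = target(x) - ((1 - f) * knots[i] + f * knots[i + 1])
    knots[i] += lr * err * (1 - f)
    knots[i + 1] += lr * err * f

xs = np.linspace(0, 1, 1000)
print(np.max(np.abs(predict(xs, knots) - target(xs))))   # small error
```

The locality of the update, touching only the knots adjacent to each input, is what makes one-multiplier-per-knot analog hardware a natural fit, and summing several such one-dimensional splines gives the multi-variable approximation mentioned above.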