Spreading Activation over Distributed Microfeatures

Neural Information Processing Systems

One attempt at explaining human inferencing is that of spreading activation, particularly in the structured connectionist paradigm. This has resulted in the building of systems with semantically nameable nodes which perform inferencing by examining the patterns of activation spread.
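
As a toy illustration of the spreading-activation style of inferencing described above, the following Python sketch clamps activation on a cue node and iteratively propagates it to associated microfeatures. The feature names, association weights, and decay constant are invented for illustration; they are not taken from the paper.

```python
import numpy as np

# Toy spreading-activation network over distributed microfeatures.
# Feature names, association weights, and the decay constant are
# illustrative assumptions, not taken from the paper.

features = ["animal", "furry", "barks", "meows"]
W = np.array([            # symmetric association strengths
    [0.0, 0.6, 0.4, 0.4],
    [0.6, 0.0, 0.3, 0.3],
    [0.4, 0.3, 0.0, 0.0],
    [0.4, 0.3, 0.0, 0.0],
])

activation = np.array([0.0, 0.0, 1.0, 0.0])    # clamp "barks" as the cue
decay = 0.8

for _ in range(5):
    # Each feature receives activation from its associates, then decays;
    # tanh keeps activations bounded.
    activation = np.tanh(decay * activation + W @ activation)
    activation[2] = 1.0                        # keep the cue clamped

for name, act in zip(features, activation):
    print(f"{name}: {act:.2f}")
```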


Neural Architecture

Neural Information Processing Systems

Valentino Braitenberg, Max Planck Institute, Federal Republic of Germany

While we are waiting for the ultimate biophysics of cell membranes and synapses to be completed, we may speculate on the shapes of neurons and on the patterns of their connections. Much of this will be significant whatever the outcome of future physiology. Take as an example the isotropy, anisotropy and periodicity of different kinds of neural networks. The very existence of these different types in different parts of the brain (or in different brains) defeats explanation in terms of embryology; the mechanisms of development are evidently able to make one kind of network or another. The reasons for the difference must be in the functions they perform.


A Computationally Robust Anatomical Model for Retinal Directional Selectivity

Neural Information Processing Systems

We analyze a mathematical model for retinal directionally selective cells based on recent electrophysiological data, and show that its computation of motion direction is robust against noise and against variations in stimulus speed.
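
The paper's anatomical model is not reproduced here, but the general delayed-inhibition (veto) idea behind retinal directional selectivity can be sketched as follows. The stimulus, the one-step delay, and the multiplicative veto are illustrative assumptions, not the authors' detailed model.

```python
import numpy as np

# Sketch of a delayed-inhibition (veto) scheme for directional selectivity.
# The stimulus, delay, and veto interaction are illustrative assumptions;
# the paper's anatomical model is more detailed.

def moving_edge(direction, n_steps=40, n_cells=20):
    """A point stimulus sweeping across a 1-D array of receptors."""
    stim = np.zeros((n_steps, n_cells))
    for t in range(n_steps):
        pos = t % n_cells if direction > 0 else n_cells - 1 - (t % n_cells)
        stim[t, pos] = 1.0
    return stim

def ds_response(stim, delay=1):
    """Excitation at receptor i is vetoed by a delayed signal from its
    neighbour i+1, so only null-direction motion triggers the veto."""
    total = 0.0
    for t in range(delay, stim.shape[0]):
        excitation = stim[t, :-1]
        veto = stim[t - delay, 1:]   # delayed input from the null side
        total += np.sum(excitation * (1.0 - veto))
    return total

print("preferred direction:", ds_response(moving_edge(+1)))
print("null direction:     ", ds_response(moving_edge(-1)))
```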


Adaptive Neural Networks Using MOS Charge Storage

Neural Information Processing Systems

However, to achieve the full power of a VLSI implementation of an adaptive algorithm, the learning operation must be built into the circuit. We have fabricated and tested a circuit ideal for this purpose by connecting a pair of capacitors with a CCD-like structure, allowing for variable-size weight changes as well as a weight decay operation. A 2.5µ CMOS version achieves better than 10 bits of dynamic range in a 140µ…
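
A behavioural sketch of such a charge-storage synapse, assuming a differential capacitor pair, quantized charge-packet transfers, and leakage-based decay; all constants are illustrative, not measurements from the fabricated circuit.

```python
# Behavioural model of a charge-storage synapse: a differential pair of
# capacitors holds the weight, updates move charge packets between them,
# and leakage implements weight decay. All constants are illustrative
# assumptions, not measurements from the fabricated circuit.

class ChargeWeight:
    def __init__(self, resolution_bits=10):
        self.q_pos = 0.5          # normalized charge on each capacitor
        self.q_neg = 0.5
        self.step = 1.0 / (1 << resolution_bits)  # smallest charge packet

    @property
    def value(self):
        return self.q_pos - self.q_neg            # differential weight

    def update(self, delta):
        # Transfer an integer number of charge packets, CCD-style, so the
        # weight change is quantized but variable in size.
        packets = round(delta / self.step)
        moved = packets * self.step / 2.0
        # Clip each capacitor to [0, 1] to model limited charge capacity.
        self.q_pos = min(1.0, max(0.0, self.q_pos + moved))
        self.q_neg = min(1.0, max(0.0, self.q_neg - moved))

    def decay(self, leak=1e-4):
        # Leakage pulls both capacitors toward equal charge, shrinking the
        # differential weight toward zero: a weight decay operation.
        mid = (self.q_pos + self.q_neg) / 2.0
        self.q_pos += leak * (mid - self.q_pos)
        self.q_neg += leak * (mid - self.q_neg)

w = ChargeWeight()
w.update(0.25)
w.decay()
print(f"weight = {w.value:.4f}")
```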


Self Organizing Neural Networks for the Identification Problem

Neural Information Processing Systems

This work introduces a new method, the Self-Organizing Neural Network (SONN) algorithm, and demonstrates its use on a system identification task. The algorithm constructs the network, chooses the neuron functions, and adjusts the weights. It is compared to the back-propagation algorithm on the identification of a chaotic time series. The results show that SONN constructs a simpler, more accurate model.
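
The paper's actual construction and evaluation criteria are not reproduced here; the following toy sketch only captures the constructive flavour: grow the model one node at a time, trying several candidate neuron functions and keeping whichever most reduces a least-squares error. The candidate set, the random input weights, and the greedy criterion are all assumptions.

```python
import numpy as np

# Toy constructive sketch in the spirit of a self-organizing network:
# add nodes one at a time, trying several candidate neuron functions and
# keeping whichever most reduces the error. The candidate set and the
# plain least-squares criterion are assumptions; the paper's actual
# search and evaluation criterion differ.

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * x[:, 0]) * x[:, 1]          # unknown system to identify

candidates = {
    "tanh": np.tanh,
    "quadratic": lambda z: z * z,
    "identity": lambda z: z,
}

outputs = [np.ones(len(x))]                # bias column
history = []
for _ in range(5):                         # add up to five nodes
    best = None
    for name, f in candidates.items():
        w_in = rng.normal(size=(x.shape[1],))
        feat = f(x @ w_in)                 # candidate node's output
        A = np.column_stack(outputs + [feat])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        err = np.mean((A @ coef - y) ** 2)
        if best is None or err < best[0]:
            best = (err, name, feat)
    err, name, feat = best
    outputs.append(feat)                   # keep the winning node
    history.append((name, err))

for name, err in history:
    print(f"added {name} node, MSE = {err:.4f}")
```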


Neural Networks that Learn to Discriminate Similar Kanji Characters

Neural Information Processing Systems

Yoshihiro Mori, Kazuhiko Yokosawa, ATR Auditory and Visual Perception Research Laboratories, 2-1-61 Shiromi, Higashiku, Osaka 540, Japan

A neural network is applied to the problem of recognizing Kanji characters. The recognition accuracy was higher than that of conventional methods. An analysis of connection weights showed that trained networks can discern the hierarchical structure of Kanji characters; this strategy is what makes the high recognition accuracy possible. Our results suggest that neural networks are very effective for Kanji character recognition.

1 INTRODUCTION
Neural networks are applied to recognition tasks in many fields.
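
As an illustration of the kind of connection-weight analysis the abstract alludes to, the sketch below compares hidden-unit weight vectors to see whether units specialize on shared sub-patterns (such as radicals). The random "trained" weights merely stand in for a real trained network.

```python
import numpy as np

# Illustrative sketch of connection-weight analysis: compare hidden-unit
# weight vectors to see whether units respond to shared sub-structure of
# the characters. Random weights stand in for a real trained network.

rng = np.random.default_rng(1)
hidden_weights = rng.normal(size=(16, 64))   # 16 hidden units, 8x8 inputs

# Cosine similarity between hidden units: large off-diagonal values would
# suggest units tuned to the same sub-pattern (e.g. a common radical).
norms = np.linalg.norm(hidden_weights, axis=1, keepdims=True)
similarity = (hidden_weights / norms) @ (hidden_weights / norms).T
print(np.round(similarity[:4, :4], 2))
```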


Efficient Parallel Learning Algorithms for Neural Networks

Neural Information Processing Systems

Parallelizable optimization techniques are applied to the problem of learning in feedforward neural networks. In addition to having superior convergence properties, optimization techniques such as the Polak-Ribière method are also significantly more efficient than the back-propagation algorithm. These results are based on experiments performed on small Boolean learning problems and on the noisy, real-valued learning problem of handwritten character recognition.

1 INTRODUCTION
The problem of learning in feedforward neural networks has received a great deal of attention recently because of the ability of these networks to represent seemingly complex mappings in an efficient parallel architecture. This learning problem can be characterized as an optimization problem, but it is unique in several respects. Function evaluation is very expensive. However, because the underlying network is parallel in nature, this evaluation is easily parallelizable. In this paper, we describe the network learning problem in a numerical framework and investigate parallel algorithms for its solution. Specifically, we compare the performance of several parallelizable optimization techniques to the standard back-propagation algorithm. Experimental results show the clear superiority of the numerical techniques.

2 NEURAL NETWORKS
A neural network is characterized by its architecture, its node functions, and its interconnection weights. In a learning problem, the first two of these are fixed, so the weight values are the only free parameters in the system.
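
A minimal sketch of the Polak-Ribière method applied to a tiny feedforward network (a 2-2-1 tanh net learning XOR). The backtracking line search and the network size are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

# Polak-Ribiere conjugate gradient on a tiny 2-2-1 tanh network (XOR).
# The line search and problem size are illustrative, not the paper's setup.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0.0, 1.0, 1.0, 0.0])

def loss_grad(w):
    """Mean-squared-error loss and its analytic gradient."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)               # hidden layer
    err = (h @ W2 + b2) - Y                # linear output minus target
    loss = 0.5 * np.mean(err ** 2)
    d_out = err / len(X)                   # backpropagate analytically
    gW2, gb2 = h.T @ d_out, d_out.sum()
    d_h = np.outer(d_out, W2) * (1 - h ** 2)
    gW1, gb1 = X.T @ d_h, d_h.sum(axis=0)
    return loss, np.concatenate([gW1.ravel(), gb1, gW2, [gb2]])

w = rng.normal(scale=0.5, size=9)
loss, g = loss_grad(w)
d = -g                                     # initial search direction
for _ in range(200):
    # Backtracking line search along the conjugate direction d.
    step = 1.0
    while step > 1e-8 and loss_grad(w + step * d)[0] > loss:
        step *= 0.5
    w = w + step * d
    new_loss, g_new = loss_grad(w)
    # Polak-Ribiere coefficient, clipped at zero to restart when needed.
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))
    d = -g_new + beta * d
    loss, g = new_loss, g_new
print(f"final loss: {loss:.5f}")
```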