Neural Information Processing Systems
Learning by Choice of Internal Representations
Grossman, Tal, Meir, Ronny, Domany, Eytan
We introduce a learning algorithm for multilayer neural networks composed of binary linear threshold elements. Whereas existing algorithms reduce the learning process to minimizing a cost function over the weights, our method treats the internal representations as the fundamental entities to be determined. Once a correct set of internal representations is arrived at, the weights are found by the local and biologically plausible Perceptron Learning Rule (PLR). We tested our learning algorithm on four problems: adjacency, symmetry, parity, and combined symmetry-parity.
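For readers unfamiliar with the PLR, the sketch below shows the standard perceptron update for a single binary threshold unit; the ±1 encoding, learning rate, and function names are illustrative assumptions, not code from the paper.

```python
import numpy as np

def perceptron_learn(patterns, targets, eta=0.1, max_epochs=100):
    """Perceptron Learning Rule for one binary threshold unit.

    patterns: (P, N) array of +/-1 inputs; targets: (P,) array of +/-1.
    Weights change only on misclassified patterns, using purely local
    information (input, output, target).
    """
    w = np.zeros(patterns.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for x, t in zip(patterns, targets):
            y = 1 if np.dot(w, x) + b > 0 else -1
            if y != t:                    # local, error-driven update
                w += eta * t * x
                b += eta * t
                errors += 1
        if errors == 0:                   # converged: all patterns correct
            break
    return w, b
```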
Modeling Small Oscillating Biological Networks in Analog VLSI
Ryckebusch, Sylvie, Bower, James M., Mead, Carver
We have used analog VLSI technology to model a class of small oscillating biological neural circuits known as central pattern generators (CPG). These circuits generate rhythmic patterns of activity which drive locomotor behaviour in the animal. We have designed, fabricated, and tested a model neuron circuit which relies on many of the same mechanisms as a biological central pattern generator neuron, such as delays and internal feedback. We show that this neuron can be used to build several small circuits based on known biological CPG circuits, and that these circuits produce patterns of output which are very similar to the observed biological patterns. To date, researchers in applied neural networks have tended to focus on mammalian systems as the primary source of potentially useful biological information.
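A software analogue can reproduce the alternating-burst behaviour the abstract describes. The sketch below simulates two mutually inhibiting model neurons with slow self-adaptation (a Matsuoka-style half-center oscillator); it is a stand-in for intuition only, not the authors' VLSI circuit, and all parameters are illustrative.

```python
import numpy as np

def half_center_cpg(steps=4000, dt=0.01):
    """Two mutually inhibiting model neurons with slow adaptation,
    a software analogue of a biological half-center CPG.
    Parameters are illustrative, chosen so the pair oscillates.
    """
    tau, tau_a = 0.25, 0.5           # membrane / adaptation time constants
    beta, w, drive = 2.5, 2.5, 1.0   # adaptation gain, mutual inhibition, tonic input
    x = np.array([0.1, 0.0])         # membrane states (asymmetric start breaks symmetry)
    a = np.zeros(2)                  # adaptation states
    out = np.zeros((steps, 2))
    for t in range(steps):
        y = np.maximum(x, 0.0)                        # rectified firing rates
        x += dt / tau * (-x - beta * a - w * y[::-1] + drive)
        a += dt / tau_a * (y - a)
        out[t] = y
    return out   # the two columns burst rhythmically, in antiphase

activity = half_center_cpg()
```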
A Massively Parallel Self-Tuning Context-Free Parser
The Parsing and Learning System (PALS) is a massively parallel self-tuning context-free parser. It is capable of parsing sentences of unbounded length, mainly due to its parse-tree representation scheme. The system is capable of improving its parsing performance through the presentation of training examples. Recent PDP research [Rumelhart et al., 1986; Feldman and Ballard, 1982; Lippmann, 1987] involving natural language processing [Fanty, 1988; Selman, 1985; Waltz and Pollack, 1985] has unrealistically restricted sentences to a fixed length. A solution to this problem was presented in the system CONPARSE [Charniak and Santos].
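PALS itself is massively parallel and self-tuning; for contrast, the sketch below is a conventional serial CKY recognizer for a grammar in Chomsky normal form. It illustrates context-free parsing only and is not the PALS architecture; the grammar encoding is an assumption.

```python
def cky_recognize(words, grammar, start="S"):
    """Serial CKY recognition for a CFG in Chomsky normal form.
    grammar maps a nonterminal to a list of right-hand sides, each a
    1-tuple (terminal) or 2-tuple (nonterminals).
    """
    n = len(words)
    # chart[i][j]: set of nonterminals deriving words[i:j+1]
    chart = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        chart[i][i] = {A for A, rhss in grammar.items() if (w,) in rhss}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                     # split point
                for A, rhss in grammar.items():
                    for B, C in (r for r in rhss if len(r) == 2):
                        if B in chart[i][k] and C in chart[k + 1][j]:
                            chart[i][j].add(A)
    return start in chart[0][n - 1]

grammar = {"S": [("NP", "VP")], "NP": [("she",)], "VP": [("runs",)]}
print(cky_recognize(["she", "runs"], grammar))        # True
```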
Linear Learning: Landscapes and Algorithms
In particular we examine what happens when the number of layers is large or when the connectivity between layers is local, and investigate some of the properties of an autoassociative algorithm. Notation will be as in [1], where additional motivations and references can be found. It is usual to criticize linear networks because "linear functions do not compute" and because several layers can always be reduced to one by the proper multiplication of matrices. However, this is not the point of view adopted here. It is assumed that the architecture of the network is given (and could perhaps depend on external constraints), and the purpose is to understand what happens during the learning phase, what strategies are adopted by a synaptic weight-modifying algorithm, ... [see also Cottrell et al. (1988) for an example of an application and the work of Linsker (1988) on the emergence of feature-detecting units in linear networks].
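The learning-phase dynamics discussed here can be watched directly in a small simulation. The sketch below runs plain gradient descent on a two-layer linear autoassociator; the dimensions, learning rate, and synthetic data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data with a dominant 2-D principal subspace embedded in 5 dimensions.
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5)) \
    + 0.05 * rng.normal(size=(200, 5))

# Two-layer linear autoassociator x -> Bx -> ABx, hidden width 2.
A = 0.1 * rng.normal(size=(5, 2))
B = 0.1 * rng.normal(size=(2, 5))
eta = 0.01
for _ in range(2000):
    H = X @ B.T                          # hidden activities
    Y = H @ A.T                          # reconstruction
    E = Y - X                            # error; loss is ||E||^2 / (2 len(X))
    A -= eta * E.T @ H / len(X)          # gradient step on A
    B -= eta * (E @ A).T @ X / len(X)    # gradient step on B
print(np.mean(E ** 2))  # small: AB approaches projection onto the top subspace
```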
Neural Net Receivers in Multiple-Access Communications
Paris, Bernd-Peter, Orsak, Geoffrey, Varanasi, Mahesh, Aazhang, Behnaam
The application of neural networks to the demodulation of spread-spectrum signals in a multiple-access environment is considered. This study is motivated in large part by the fact that, in a multiuser system, the conventional (matched filter) receiver suffers severe performance degradation as the relative powers of the interfering signals become large (the "near-far" problem). Furthermore, the optimum receiver, which alleviates the near-far problem, is too complex to be of practical use. Receivers based on multi-layer perceptrons are considered as a simple and robust alternative to the optimum solution. The optimum receiver is used to benchmark the performance of the neural net receiver; in particular, it proves instrumental in identifying the decision regions of the neural networks. The back-propagation algorithm and a modified version of it are used to train the neural net. An importance sampling technique is introduced to reduce the number of simulations necessary to evaluate the performance of neural nets.
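The near-far degradation of the conventional receiver is easy to reproduce in simulation. The sketch below implements a matched-filter receiver for a two-user synchronous spread-spectrum toy model and shows its bit error rate rising with the interferer's amplitude; the signatures and parameters are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-user synchronous toy model with correlated (non-orthogonal) signatures.
s1 = np.array([+1, +1, +1, -1, +1, -1, -1]) / np.sqrt(7)
s2 = np.array([+1, +1, -1, +1, -1, +1, -1]) / np.sqrt(7)

def matched_filter_ber(interferer_amp, n_bits=20000, noise_std=0.4):
    """Bit error rate of the conventional (matched filter) receiver for
    user 1 as the interfering user's amplitude grows (the near-far effect).
    """
    b1 = rng.choice([-1, 1], n_bits)
    b2 = rng.choice([-1, 1], n_bits)
    r = (np.outer(b1, s1) + interferer_amp * np.outer(b2, s2)
         + noise_std * rng.normal(size=(n_bits, len(s1))))
    decisions = np.sign(r @ s1)          # correlate with user 1's signature
    return np.mean(decisions != b1)

for amp in (0.5, 1.0, 2.0, 4.0):
    print(amp, matched_filter_ber(amp))  # BER rises with interferer power
```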
GEMINI: Gradient Estimation Through Matrix Inversion After Noise Injection
Le Cun, Yann, Galland, Conrad C., Hinton, Geoffrey E.
Learning procedures that measure how random perturbations of unit activities correlate with changes in reinforcement are inefficient but simple to implement in hardware. Procedures like back-propagation (Rumelhart, Hinton and Williams, 1986), which compute how changes in activities affect the output error, are much more efficient, but require more complex hardware. GEMINI is a hybrid procedure for multilayer networks which shares many of the implementation advantages of correlational reinforcement procedures but is more efficient. GEMINI injects noise only at the first hidden layer and measures the resultant effect on the output error. A linear network associated with each hidden layer iteratively inverts the matrix which relates the noise to the error change, thereby obtaining the error derivatives. No back-propagation is involved, thus allowing unknown non-linearities in the system. Two simulations demonstrate the effectiveness of GEMINI.
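The core of the procedure is a linear estimation problem: for small injected noise delta, the error change is approximately the inner product of dE/dh with delta. The sketch below recovers the hidden-layer error derivatives from noise/error-change pairs, using a batch least-squares solve in place of the paper's iterative matrix inversion; the black-box error function is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def estimate_error_gradient(error_fn, h, sigma=1e-3, n_probes=50):
    """GEMINI-style gradient estimation at a hidden layer: perturb the
    hidden activities h with small noise, record the change in output
    error, and recover dE/dh from the linear relation dE ~ g . delta.
    error_fn is any black box (it may contain unknown non-linearities;
    no back-propagation is used).
    """
    e0 = error_fn(h)
    deltas = sigma * rng.normal(size=(n_probes, h.size))
    d_err = np.array([error_fn(h + d) - e0 for d in deltas])
    g, *_ = np.linalg.lstsq(deltas, d_err, rcond=None)
    return g                             # estimate of dE/dh

# Illustrative black box with a known analytic gradient, for comparison.
h = rng.normal(size=4)
err = lambda v: np.sum(np.tanh(v) ** 2)
print(estimate_error_gradient(err, h))
print(2 * np.tanh(h) * (1 - np.tanh(h) ** 2))   # analytic dE/dh
```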
An Analog Self-Organizing Neural Network Chip
Mann, James R., Gilbert, Sheldon
This paper describes an analog version of a self-organizing feature map circuit. The design implements Kohonen's self-organizing feature map algorithm [Kohonen, 1988] with some modifications imposed by practical circuit limitations. The feature map algorithm automatically adapts connection weights to nodes in the network such that each node comes to represent a distinct class of features in the input space. The system also self-organizes such that neighboring nodes become responsive to similar input classes. The prototype circuit was fabricated in two parts (for testability): a 4-node, 4-input synaptic array, and a weight adaptation and refresh circuit.
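The underlying algorithm is easy to state in software. The sketch below trains a 1-D Kohonen map at the chip's scale (4 nodes, 4 inputs); the learning-rate and neighborhood schedules are illustrative assumptions rather than the chip's adaptation circuit.

```python
import numpy as np

rng = np.random.default_rng(3)

def train_som(data, n_nodes=4, epochs=200, eta=0.3):
    """1-D Kohonen self-organizing feature map. The winning node and its
    immediate neighbors move toward each input, so neighboring nodes
    come to respond to similar input classes.
    """
    w = rng.uniform(size=(n_nodes, data.shape[1]))    # weight vectors
    for epoch in range(epochs):
        lr = eta * (1 - epoch / epochs)               # decaying learning rate
        radius = 1 if epoch < epochs // 2 else 0      # shrinking neighborhood
        for x in rng.permutation(data):
            winner = np.argmin(np.sum((w - x) ** 2, axis=1))
            lo, hi = max(0, winner - radius), min(n_nodes, winner + radius + 1)
            w[lo:hi] += lr * (x - w[lo:hi])           # pull neighborhood toward x
    return w

data = rng.uniform(size=(100, 4))                     # 4 inputs, matching the chip
print(train_som(data))
```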