Estimating analogical similarity by dot-products of Holographic Reduced Representations

Neural Information Processing Systems

Gentner and Markman (1992) suggested that the ability to deal with analogy will be a "Watershed or Waterloo" for connectionist models. They identified "structural alignment" as the central aspect of analogy making. They noted the apparent ease with which people can perform structural alignment in a wide variety of tasks and were pessimistic about the prospects for the development of a distributed connectionist model that could be useful in performing structural alignment. In this paper I describe how Holographic Reduced Representations (HRRs) (Plate, 1991; Plate, 1994), a fixed-width distributed representation for nested structures, can be used to obtain fast estimates of analogical similarity.
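The core HRR operations can be sketched in a few lines: circular convolution binds a role vector to a filler vector, superposition combines bindings into an episode, and a single dot-product then gives the fast similarity estimate. This is a minimal sketch using NumPy; the episode contents (agent/object roles, the fillers) are illustrative examples, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512  # vector dimensionality (a typical choice for HRRs)

def rand_vec():
    # Elements drawn i.i.d. from N(0, 1/n), as in Plate's HRRs.
    return rng.normal(0.0, 1.0 / np.sqrt(n), n)

def bind(a, b):
    # Circular convolution, computed via FFT: the HRR binding operation.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def sim(x, y):
    # Normalized dot-product: the fast similarity estimate.
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

agent, obj = rand_vec(), rand_vec()                    # role vectors
spot, jane, fido = rand_vec(), rand_vec(), rand_vec()  # filler vectors

e1 = bind(agent, spot) + bind(obj, jane)  # "Spot bit Jane"
e2 = bind(agent, spot) + bind(obj, fido)  # shares Spot in the same role
e3 = bind(agent, jane) + bind(obj, spot)  # same fillers, roles swapped

# e2 shares a role-filler binding with e1; e3 shares only the fillers,
# so the dot-product estimate ranks e2 as the closer analogue.
print(sim(e1, e2), sim(e1, e3))
```

Because convolution-bound pairs are nearly orthogonal to everything except themselves, a single dot-product is sensitive to *which role* a filler occupies, which is what makes the estimate structurally informed rather than purely featural.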


The Role of MT Neuron Receptive Field Surrounds in Computing Object Shape from Velocity Fields

Neural Information Processing Systems

However, the relation between our ability to perceive structure in stimuli simulating 3-D objects and neuronal properties has not been addressed to date. Here we discuss the possibility that area MT RF surrounds might be involved in shape-from-motion perception. We introduce a simplified model of MT neurons and analyse its implications for solving the SFM problem.

2 REDEFINING THE SFM PROBLEM

2.1 RELATIVE MOTION AS A CUE FOR RELATIVE DEPTH

Since Helmholtz, motion parallax has been known to be a powerful cue providing information about both the structure of the surrounding environment and the direction of self-motion. Moving objects, too, induce velocity fields that allow judgements about their shapes. We can capture both cases by assuming that an observer is tracking a point on a surface of interest.
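The parallax geometry behind this cue can be made concrete with a toy calculation (my simplification, not the paper's model): for an observer translating laterally with speed T while fixating a point at depth Z_fix, the retinal speed of a point at depth Z is approximately v = f·T·(1/Z − 1/Z_fix), so *relative* velocity between neighbouring points encodes their relative depth.

```python
# Toy motion-parallax sketch (assumed first-order geometry, not the
# paper's neuronal model).
f = 1.0      # focal length (arbitrary units)
T = 0.1      # lateral translation speed of the observer
Z_fix = 2.0  # depth of the fixated/tracked point

def retinal_speed(Z):
    # First-order parallax: points at the fixation depth do not move.
    return f * T * (1.0 / Z - 1.0 / Z_fix)

# Points nearer than fixation move opposite to points beyond it.
near, far = retinal_speed(1.0), retinal_speed(4.0)
print(near, far)  # near > 0, far < 0

# The relative motion between two points depends only on their depths,
# not on the fixation distance -- the cue an RF surround could compute.
dv = retinal_speed(1.0) - retinal_speed(4.0)
print(dv)
```

A center-surround velocity comparison of exactly this kind is what makes MT surrounds a plausible substrate for extracting relative depth from velocity fields.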


Event-Driven Simulation of Networks of Spiking Neurons

Neural Information Processing Systems

A fast event-driven software simulator has been developed for simulating large networks of spiking neurons and synapses. The primitive network elements are designed to exhibit biologically realistic behaviors, such as spiking, refractoriness, adaptation, axonal delays, summation of post-synaptic current pulses, and tonic current inputs. The efficient event-driven representation allows large networks to be simulated in a fraction of the time that would be required for a full compartmental-model simulation. Corresponding analog CMOS VLSI circuit primitives have been designed and characterized, so that large-scale circuits may be simulated prior to fabrication.

1 Introduction

Artificial neural networks typically use an abstraction of real neuron behaviour, in which the continuously varying mean firing rate of the neuron is presumed to carry the information about the neuron's time-varying state of excitation [1]. This useful simplification allows the neuron's state to be represented as a time-varying continuous-amplitude quantity.
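The efficiency argument can be illustrated with a minimal event-driven loop (an illustrative sketch, not the paper's simulator): membrane state is updated *lazily*, only when an event arrives, using the closed-form exponential decay between events, so quiescent neurons cost nothing.

```python
import heapq, itertools, math

# Assumed toy constants: membrane time constant, threshold, axonal delay.
TAU, V_TH, DELAY = 20.0, 1.0, 2.0
counter = itertools.count()  # tie-breaker so the heap never compares neurons

class Neuron:
    def __init__(self):
        self.v = 0.0          # membrane potential at the last event
        self.t_last = 0.0     # time of the last event
        self.targets = []     # list of (postsynaptic neuron, weight)
        self.spikes = []

def deliver(events, t, neuron, w):
    heapq.heappush(events, (t, next(counter), neuron, w))

def run(events, t_end):
    while events and events[0][0] <= t_end:
        t, _, n, w = heapq.heappop(events)
        # Lazy update: closed-form decay since this neuron's last event.
        n.v = n.v * math.exp(-(t - n.t_last) / TAU) + w
        n.t_last = t
        if n.v >= V_TH:       # threshold crossing: spike, reset, propagate
            n.v = 0.0
            n.spikes.append(t)
            for tgt, wt in n.targets:
                deliver(events, t + DELAY, tgt, wt)

# Two-neuron chain: an external input drives a, whose spike drives b.
a, b = Neuron(), Neuron()
a.targets = [(b, 1.5)]
events = []
deliver(events, 1.0, a, 1.2)  # suprathreshold external input at t = 1
run(events, 50.0)
print(a.spikes, b.spikes)     # b fires one axonal delay after a
```

Refractoriness, adaptation, and current-pulse summation from the paper would be additional per-event state updates in the same loop; the priority queue and lazy decay are the essence of the speedup over time-stepped compartmental simulation.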


Decoding Cursive Scripts

Neural Information Processing Systems

Online cursive handwriting recognition is currently one of the most intriguing challenges in pattern recognition. This study presents a novel approach to this problem which is composed of two complementary phases. The first is dynamic encoding of the writing trajectory into a compact sequence of discrete motor control symbols. In this compact representation we largely remove the redundancy of the script, while preserving most of its intelligible components. In the second phase these control sequences are used to train adaptive probabilistic acyclic automata (PAA) for the important ingredients of the writing trajectories, e.g.
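The first phase can be sketched as follows (an illustrative guess at the flavor of the encoding, not the paper's exact scheme): quantize the pen trajectory's direction of motion into a small alphabet of discrete motor-control symbols and collapse repeats, which removes redundancy while keeping the stroke's intelligible shape.

```python
import math

def encode(points, n_symbols=8):
    # Map each trajectory step to the nearest of n_symbols directions
    # (hypothetical encoder; symbol alphabet size is an assumption).
    symbols = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        s = int(round(angle / (2 * math.pi) * n_symbols)) % n_symbols
        symbols.append(s)
    # Run-length collapse: one symbol per straight segment of the script.
    return [s for i, s in enumerate(symbols) if i == 0 or s != symbols[i - 1]]

# An "L"-shaped stroke: two steps down, then two steps right.
stroke = [(0, 2), (0, 1), (0, 0), (1, 0), (2, 0)]
print(encode(stroke))  # four raw steps compress to two symbols
```

Sequences of such symbols are exactly the kind of short discrete strings on which probabilistic acyclic automata can then be trained.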


An Optimization Method of Layered Neural Networks based on the Modified Information Criterion

Neural Information Processing Systems

This paper proposes a practical optimization method for layered neural networks, by which the optimal model and parameters can be found simultaneously. We modify the conventional information criterion into a differentiable function of the parameters, and then minimize it while controlling it back to the ordinary form. The effectiveness of this method is discussed theoretically and experimentally.
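One way to picture the modification (an assumed form, not the paper's exact criterion): replace the discrete parameter count k in an information criterion with a smooth surrogate such as Σᵢ tanh(wᵢ²/T). As the temperature T → 0, each term approaches 1 for nonzero weights and 0 otherwise, so the differentiable penalty is "controlled back" to the ordinary parameter count.

```python
import numpy as np

def smooth_count(w, T):
    # Differentiable surrogate for the number of nonzero parameters.
    # (Illustrative gate; the paper's exact functional form may differ.)
    return float(np.tanh(w**2 / T).sum())

w = np.array([0.0, 0.5, -1.2, 0.0, 2.0])  # 3 nonzero weights
for T in (1.0, 0.1, 1e-4):
    print(T, smooth_count(w, T))
# As T shrinks, the surrogate approaches 3, the true parameter count,
# so minimizing (training error + penalty) while annealing T selects
# the model and its parameters in one continuous optimization.
```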


Connectionism for Music and Audition

Neural Information Processing Systems

In recent years, NIPS has heard neural networks generate tunes and harmonize chorales. With a large amount of music becoming available in computer readable form, real data can be used to train connectionist models. At the beginning of this workshop, Andreas Weigend focused on architectures to capture structure on multiple time scales.


The Parti-Game Algorithm for Variable Resolution Reinforcement Learning in Multidimensional State-Spaces

Neural Information Processing Systems

Parti-game is a new algorithm for learning from delayed rewards in high-dimensional real-valued state-spaces. In high dimensions it is essential that learning does not explore or plan over the state space uniformly. Parti-game maintains a decision-tree partitioning of state-space and applies game-theory and computational geometry techniques to efficiently and reactively concentrate high resolution only on critical areas. Many simulated problems have been tested, ranging from 2-dimensional to 9-dimensional state-spaces, including mazes, path planning, nonlinear dynamics, and uncurling snake robots in restricted spaces. In all cases, a good solution is found in fewer than twenty trials and a few minutes.

1 REINFORCEMENT LEARNING

Reinforcement learning [Samuel, 1959, Sutton, 1984, Watkins, 1989, Barto et al., 1991] is a promising method for control systems to program and improve themselves.
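The variable-resolution idea can be sketched minimally (illustrative only; the full parti-game algorithm also plans over the cells with a game-theoretic minimax criterion, which is omitted here): cells of a decision-tree partition are split on demand, so resolution concentrates only where the learner needs it.

```python
class Cell:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi  # axis-aligned box bounds
        self.children = None

    def contains(self, x):
        return all(l <= xi <= h for l, xi, h in zip(self.lo, x, self.hi))

    def split(self):
        # Halve the cell along its widest axis, kd-tree style.
        d = max(range(len(self.lo)), key=lambda i: self.hi[i] - self.lo[i])
        mid = 0.5 * (self.lo[d] + self.hi[d])
        hi_left = list(self.hi); hi_left[d] = mid
        lo_right = list(self.lo); lo_right[d] = mid
        self.children = [Cell(list(self.lo), hi_left),
                         Cell(lo_right, list(self.hi))]

    def leaf_for(self, x):
        if self.children is None:
            return self
        return next(c.leaf_for(x) for c in self.children if c.contains(x))

# Refine resolution only around a "critical" region of a 2-D state space
# (e.g. near an obstacle or goal), leaving the rest of the space coarse.
root = Cell([0.0, 0.0], [1.0, 1.0])
critical = (0.9, 0.9)
for _ in range(4):
    root.leaf_for(critical).split()
leaf = root.leaf_for(critical)
print(leaf.lo, leaf.hi)  # a small cell enclosing the critical point
```

After four targeted splits the tree has fine cells only near the critical point, which is why planning cost grows with task difficulty rather than with dimensionality alone.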


Digital Boltzmann VLSI for constraint satisfaction and learning

Neural Information Processing Systems

We built a high-speed, digital mean-field Boltzmann chip and SBus board for general problems in constraint satisfaction and learning. Each chip has 32 neural processors and 4 weight-update processors, supporting an arbitrary topology of up to 160 functional neurons. On-chip learning runs at a theoretical maximum rate of 3.5 x 10
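The mean-field Boltzmann dynamics the chip implements can be sketched generically (the standard update rule; the chip's fixed-point digital arithmetic and topology constraints are not modelled here): each unit deterministically settles to the expected value of the corresponding stochastic Boltzmann unit.

```python
import numpy as np

def settle(W, b, T=1.0, steps=100):
    # Deterministic mean-field settling: x_i <- tanh((sum_j W_ij x_j + b_i)/T)
    # replaces stochastic Gibbs sampling of binary units.
    x = np.zeros(len(b))
    for _ in range(steps):
        x = np.tanh((W @ x + b) / T)
    return x

# Tiny constraint-satisfaction example: two units coupled to disagree.
W = np.array([[0.0, -2.0],
              [-2.0, 0.0]])  # negative weight = "not both on" constraint
b = np.array([0.5, 0.0])     # small bias breaks the tie
x = settle(W, b)
print(x)  # the biased unit settles near +1, its neighbour near -1
```

Learning on such hardware then needs only locally computable correlation statistics from clamped and free settling phases, which is what makes dedicated weight-update processors practical.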


Amplifying and Linearizing Apical Synaptic Inputs to Cortical Pyramidal Cells

Neural Information Processing Systems

About half the pyramidal neurons in layer 5 of neocortex have long apical dendrites that arborize extensively in layers 1-3. There the dendrites receive synaptic input from the inter-areal feedback projections (Felleman and van Essen, 1991) that play an important role in many models of brain function (Rockland and Virga, 1989). At first sight this seems an unsatisfactory arrangement: in light of traditional passive models of dendritic function, the distal inputs cannot have a significant effect on the output discharge of the pyramidal cell. The distal inputs are at least one to two space constants removed from the soma in layer 5, so only a small fraction of their voltage signal will reach it.
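The passive prediction is a one-line calculation (an infinite-cable back-of-envelope approximation, my simplification of the traditional models referred to above): steady-state voltage attenuates as exp(−x/λ) with electrotonic distance x measured in space constants λ.

```python
import math

def fraction_reaching_soma(distance_in_space_constants):
    # Steady-state passive attenuation in an infinite cable: exp(-x/lambda).
    return math.exp(-distance_in_space_constants)

for L in (1.0, 2.0):
    print(L, fraction_reaching_soma(L))
# At 1-2 space constants only ~37% down to ~14% of the distal signal
# survives, which is why purely passive models predict that layer 1-3
# inputs have little effect on somatic discharge.
```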


Neural Network Methods for Optimization Problems

Neural Information Processing Systems

In a talk entitled "Trajectory Control of Convergent Networks with applications to TSP", Natan Peterfreund (Computer Science, Technion) dealt with the problem of controlling the trajectories of continuous convergent neural network models for solving optimization problems, without affecting their equilibria set or their convergence properties. Natan presented a class of feedback control functions that achieve this objective while also improving the convergence rates. A modified Hopfield and Tank neural network model, developed through the proposed feedback approach, was found to substantially improve on the results of the original model in solving the Traveling Salesman Problem. The proposed feedback overcame the 2n-fold symmetry of the TSP. In a talk entitled "Training Feedforward Neural Networks quickly and accurately using Very Fast Simulated Reannealing Methods", Bruce Rosen (Asst.