
Summed Weight Neuron Perturbation: An O(N) Improvement Over Weight Perturbation

Neural Information Processing Systems

The algorithm presented performs gradient descent on the weight space of an Artificial Neural Network (ANN), using a finite difference to approximate the gradient. The method is novel in that it achieves a computational complexity similar to that of Node Perturbation, O(N³), but does not require access to the activity of hidden or internal neurons. This is possible due to a stochastic relation between perturbations at the weights and the neurons of an ANN. The algorithm is also similar to Weight Perturbation in that it is optimal in terms of hardware requirements when used for the training of VLSI implementations of ANNs.
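The core idea of perturbation-based training can be illustrated with a minimal sketch of plain Weight Perturbation (the baseline the paper improves on, not the summed-weight variant itself): each weight is nudged and the change in a scalar loss gives a forward-difference gradient estimate, using only loss evaluations and no hidden-unit activity. All names here are illustrative, not from the paper.

```python
import numpy as np

def wp_gradient(loss, w, eps=1e-4):
    """Forward-difference gradient estimate (Weight Perturbation sketch).
    Requires only evaluations of the scalar loss, never internal activations."""
    base = loss(w)
    grad = np.zeros_like(w)
    for i in range(w.size):
        w_pert = w.copy()
        w_pert.flat[i] += eps          # perturb one weight at a time
        grad.flat[i] = (loss(w_pert) - base) / eps
    return grad

# Toy check: loss(w) = ||w||^2 has exact gradient 2w.
w = np.array([1.0, -2.0, 0.5])
g = wp_gradient(lambda v: float(np.sum(v**2)), w)
```

Because every weight needs its own loss evaluation, this baseline is expensive; the paper's contribution is reducing that cost by an O(N) factor via summed perturbations.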


Biologically Plausible Local Learning Rules for the Adaptation of the Vestibulo-Ocular Reflex

Neural Information Processing Systems

Lisberger, Department of Physiology, W.M. Keck Foundation Center for Integrative Neuroscience, University of California, San Francisco, CA 94143

Abstract: The vestibulo-ocular reflex (VOR) is a compensatory eye movement that stabilizes images on the retina during head turns. Its magnitude, or gain, can be modified by visual experience during head movements. Possible learning mechanisms for this adaptation have been explored in a model of the oculomotor system based on anatomical and physiological constraints. The local correlational learning rules in our model reproduce the adaptation and behavior of the VOR under certain parameter conditions. From these conditions, predictions for the time course of adaptation at the learning sites are made.

1 INTRODUCTION: The primate oculomotor system is capable of maintaining the image of an object on the fovea even when the head and object are moving simultaneously.


An Object-Oriented Framework for the Simulation of Neural Nets

Neural Information Processing Systems

The field of software simulators for neural networks has been expanding very rapidly in recent years, but their importance is still being underestimated. They must provide increasing levels of assistance for the design, simulation and analysis of neural networks. With our object-oriented framework (SESAME) we intend to show that very high degrees of transparency, manageability and flexibility for complex experiments can be obtained. SESAME's basic design philosophy is inspired by the natural way in which researchers explain their computational models. Experiments are performed with networks of building blocks, which can be extended very easily. Mechanisms have been integrated to facilitate the construction and analysis of very complex architectures.


Bayesian Learning via Stochastic Dynamics

Neural Information Processing Systems

The attempt to find a single "optimal" weight vector in conventional network training can lead to overfitting and poor generalization. Bayesian methods avoid this, without the need for a validation set, by averaging the outputs of many networks with weights sampled from the posterior distribution given the training data. This sample can be obtained by simulating a stochastic dynamical system that has the posterior as its stationary distribution.
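The stochastic-dynamics idea can be sketched with unadjusted Langevin dynamics on a toy one-dimensional "posterior" (a standard normal, chosen purely for illustration): drifting along the gradient of the log posterior while injecting Gaussian noise yields a chain whose stationary distribution approximates the posterior, so averaging over the chain approximates posterior averaging.

```python
import numpy as np

def langevin_sample(grad_log_post, w0, step=0.01, n_steps=5000, rng=None):
    """Unadjusted Langevin dynamics: w <- w + (step/2) * grad log p(w) + noise.
    The chain's stationary distribution approximates the posterior p(w)."""
    rng = rng or np.random.default_rng(0)
    w = w0
    samples = []
    for _ in range(n_steps):
        w = w + 0.5 * step * grad_log_post(w) + np.sqrt(step) * rng.standard_normal()
        samples.append(w)
    return np.array(samples)

# Toy posterior: standard normal, so grad log p(w) = -w.
s = langevin_sample(lambda w: -w, w0=0.0)
```

Predictions would then be made by averaging network outputs over the sampled weights rather than committing to a single optimum.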


Probability Estimation from a Database Using a Gibbs Energy Model

Neural Information Processing Systems

We present an algorithm for creating a neural network which produces accurate probability estimates as outputs. The network implements a Gibbs probability distribution model of the training database. This model is created by a new transformation relating the joint probabilities of attributes in the database to the weights (Gibbs potentials) of the distributed network model. The theory of this transformation is presented together with experimental results. One advantage of this approach is that the network weights are prescribed without iterative gradient descent. Used as a classifier, the network tied or outperformed published results on a variety of databases.
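The flavor of prescribing weights directly from joint probabilities, with no gradient descent, can be shown on a toy two-attribute table. This is an assumed simplified form (potentials set to log empirical probabilities so the Gibbs model reproduces them exactly), not the paper's actual transformation.

```python
import numpy as np

# Empirical counts over two binary attributes (hypothetical data).
counts = np.array([[30.0, 10.0],
                   [20.0, 40.0]])
p_emp = counts / counts.sum()

# Prescribe Gibbs potentials directly: E(x) = -log p_emp(x),
# so p_model(x) = exp(-E(x)) / Z recovers the empirical joint.
potentials = np.log(p_emp)      # weights written down, never trained
energies = -potentials
p_model = np.exp(-energies)
p_model /= p_model.sum()        # normalization (Z is 1 here by construction)
```

The point of the sketch is only the workflow: estimate joint probabilities from the database once, then write the corresponding potentials into the network in closed form.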


Destabilization and Route to Chaos in Neural Networks with Random Connectivity

Neural Information Processing Systems

The occurrence of chaos in recurrent neural networks is thought to depend on the architecture and on the synaptic coupling strength. It is studied here for a randomly diluted architecture. By normalizing the variance of synaptic weights, we produce a bifurcation parameter, dependent on this variance and on the slope of the transfer function but independent of the connectivity, that allows sustained activity and the occurrence of chaos when it reaches a critical value. Even for weak connectivity and small size, we find numerical results in accordance with the theoretical ones previously established for fully connected networks of infinite size. Moreover, the route towards chaos is numerically checked to be quasi-periodic, whatever the type of the first bifurcation (Hopf, pitchfork, or flip).
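A toy simulation conveys the bifurcation described: in a randomly diluted network with weight variance normalized by the number of connections, a gain below the critical value lets activity die out, while a gain above it sustains activity. The network size, dilution, and gains below are arbitrary illustrative choices, not the paper's settings.

```python
import numpy as np

def asymptotic_activity(g, n=200, c=0.2, steps=200, rng=None):
    """Iterate x <- tanh(g * W x) on a randomly diluted network.
    Weight variance is scaled by 1/(c*n), so the bifurcation parameter
    is the gain g (transfer-function slope), independent of connectivity c."""
    rng = rng or np.random.default_rng(1)
    mask = rng.random((n, n)) < c                      # random dilution
    W = rng.standard_normal((n, n)) * mask / np.sqrt(c * n)
    x = 0.1 * rng.standard_normal(n)
    for _ in range(steps):
        x = np.tanh(g * (W @ x))
    return float(np.sqrt(np.mean(x ** 2)))             # RMS activity

low = asymptotic_activity(0.5)   # below critical gain: activity decays to zero
high = asymptotic_activity(2.0)  # above critical gain: sustained activity
```

Sweeping `g` finely (and tracking the attractor rather than just the RMS) is what would reveal the quasi-periodic route to chaos the abstract refers to.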



A Knowledge-Based Model of Geometry Learning

Neural Information Processing Systems

We propose a model of the development of geometric reasoning in children that explicitly involves learning. The model uses a neural network that is initialized with an understanding of geometry similar to that of second-grade children. Through the presentation of a series of examples, the model is shown to develop an understanding of geometry similar to that of fifth-grade children who were trained using similar materials.


Object-Based Analog VLSI Vision Circuits

Neural Information Processing Systems

We describe two successfully working analog VLSI vision circuits that move beyond pixel-based early vision algorithms. One circuit, implementing the dynamic wires model, provides dedicated lines of communication among groups of pixels that share a common property. The chip uses the dynamic wires model to compute the arc length of visual contours. Another circuit labels all points inside a given contour with one voltage and all others with another voltage. Its behavior is very robust, since small breaks in contours are automatically sealed, providing for figure-ground segregation in a noisy environment. Both chips are implemented using networks of resistors and switches and represent a step towards object-level processing, since a single voltage value encodes the property of an ensemble of pixels.


Single-Iteration Threshold Hamming Networks

Neural Information Processing Systems

Isaac Meilijson, Eytan Ruppin, Moshe Sipper; School of Mathematical Sciences, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, 69978 Tel Aviv, Israel

Abstract: We analyze in detail the performance of a Hamming network classifying inputs that are distorted versions of one of its m stored memory patterns. The activation function of the memory neurons in the original Hamming network is replaced by a simple threshold function, yielding a threshold Hamming network (THN). The THN drastically reduces the time and space complexity of Hamming network classifiers.

1 Introduction: Originally presented in (Steinbuch 1961, Taylor 1964), the Hamming network (HN) has received renewed attention in recent years (Lippmann et al.). The HN calculates the Hamming distance between the input pattern and each memory pattern, and selects the memory with the smallest distance. It is composed of two subnets: the similarity subnet, consisting of an n-neuron input layer connected with an m-neuron memory layer, calculates the number of equal bits between the input and each memory pattern.
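The single-iteration idea can be sketched directly: each memory neuron counts matching bits with the input and fires if and only if that similarity exceeds a fixed threshold, so with a suitable threshold only the nearest memory fires, and no iterative winner-take-all competition is needed. The patterns and threshold below are illustrative, not from the paper.

```python
import numpy as np

def thn_classify(memories, x, theta):
    """Threshold Hamming network sketch: one feed-forward pass.
    Each memory neuron computes its similarity to x (matching bits,
    i.e. n minus the Hamming distance) and fires iff it reaches theta."""
    sims = (memories == x).sum(axis=1)
    winners = np.flatnonzero(sims >= theta)
    return winners, sims

mems = np.array([[1, 1, 1, 1, 1, 1],
                 [0, 0, 0, 0, 0, 0],
                 [1, 0, 1, 0, 1, 0]])
x = np.array([1, 1, 1, 1, 1, 0])       # distorted copy of the first memory
winners, sims = thn_classify(mems, x, theta=5)
```

The analytical content of the paper is choosing `theta` so that, with high probability under the distortion model, exactly the correct memory neuron crosses the threshold.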