Spatial Organization of Neural Networks: A Probabilistic Modeling Approach

Neural Information Processing Systems

ABSTRACT The aim of this paper is to explore the spatial organization of neural networks under Markovian assumptions concerning the behaviour of individual cells and the interconnection mechanism. Space-organizational properties of neural nets are very relevant in image modeling and pattern analysis, where spatial computations on stochastic two-dimensional image fields are involved. As a first approach we develop a random neural network model, based upon simple probabilistic assumptions, whose organization is studied by means of discrete-event simulation. We then investigate the possibility of approximating the random network's behaviour by using an analytical approach originating from the theory of general product-form queueing networks. The neural network is described by an open network of nodes, in which customers moving from node to node represent stimulations, and connections between nodes are expressed in terms of suitably selected routing probabilities. We obtain the solution of the model under different disciplines affecting the time spent by a stimulation at each node visited.
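
To make the product-form analogy concrete, the sketch below (an illustration of ours, with hypothetical rates and routing probabilities, not code from the paper) solves the traffic equations of a small open Jackson-style network in which customers represent stimulations:

```python
import numpy as np

# Minimal sketch (not from the paper): stationary analysis of an open
# Jackson-style network in which "customers" are stimulations moving
# between neuron-nodes according to routing probabilities.

# Hypothetical 3-node example.
gamma = np.array([1.0, 0.5, 0.0])   # external stimulation arrival rates
mu = np.array([4.0, 3.0, 5.0])      # service rates at each node
R = np.array([                      # R[i, j]: prob. a stimulation leaving
    [0.0, 0.6, 0.2],                # node i is routed to node j (row sums
    [0.3, 0.0, 0.4],                # < 1; the remainder leaves the network)
    [0.1, 0.1, 0.0],
])

# Traffic equations: lam = gamma + R^T lam  =>  (I - R^T) lam = gamma
lam = np.linalg.solve(np.eye(3) - R.T, gamma)
rho = lam / mu                      # per-node utilization; stable if rho < 1

# Product form: the joint steady-state queue-length distribution
# factorizes, e.g. P(n1, n2, n3) = prod_i (1 - rho_i) * rho_i**n_i.
print("arrival rates:", lam)
print("utilizations:", rho)
```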


Computing Motion Using Resistive Networks

Neural Information Processing Systems

We open our eyes and we "see" the world in all its color, brightness, and movement. Yet we have great difficulty when trying to endow our machines with similar abilities. In this paper we shall describe recent developments in the theory of early vision which lead from the formulation of the motion problem as an ill-posed one to its solution by minimizing certain "cost" functions. These cost or energy functions can be mapped onto simple analog and digital resistive networks. Thus, we shall see how the optical flow can be computed by injecting currents into resistive networks and recording the resulting stationary voltage distribution at each node. These networks can be implemented in CMOS VLSI circuits and represent plausible candidates for biological vision systems. APERTURE PROBLEM AND SMOOTHNESS ASSUMPTION In this study, we use intensity-based schemes for recovering motion.
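
As a toy illustration of how such cost functions reduce to network computations, the following sketch (our own one-dimensional construction, not the authors' implementation) minimizes a smoothness-regularized motion cost by solving the sparse linear system that a resistive grid would settle to:

```python
import numpy as np

# Illustrative 1-D analogue of the smoothness-constrained motion cost:
#   J(u) = sum_i (Ex_i * u_i + Et_i)**2 + lam * sum_i (u_{i+1} - u_i)**2
# Its stationarity conditions form a sparse linear system -- the digital
# counterpart of a resistive grid settling to its stationary voltages.

n, lam = 32, 5.0
rng = np.random.default_rng(0)
Ex = rng.normal(1.0, 0.2, n)        # spatial intensity gradients (fake data)
true_u = 0.7                        # underlying velocity
Et = -Ex * true_u + rng.normal(0, 0.05, n)   # temporal gradients

# Build A u = b from dJ/du = 0: the data term is diagonal, the smoothness
# term contributes the 1-D graph Laplacian (the "resistor" couplings).
L = np.diag(np.r_[1, 2 * np.ones(n - 2), 1]) \
    - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
A = np.diag(Ex**2) + lam * L
b = -Ex * Et
u = np.linalg.solve(A, b)           # "stationary voltage" = estimated flow
print("mean estimated velocity:", u.mean())
```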


A Mean Field Theory of Layer IV of Visual Cortex and Its Application to Artificial Neural Networks

Neural Information Processing Systems

ABSTRACT A single cell theory for the development of selectivity and ocular dominance in visual cortex has been presented previously by Bienenstock, Cooper and Munro [1]. This has been extended to a network applicable to layer IV of visual cortex [2]. In this paper we present a mean field approximation that captures in a fairly transparent manner the qualitative, and many of the quantitative, results of the network theory. Finally, we consider the application of this theory to artificial neural networks and show that a significant reduction in architectural complexity is possible. A SINGLE LAYER NETWORK AND THE MEAN FIELD APPROXIMATION We consider a single layer network of cells that receive signals from outside the layer (Figure 1).
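
For intuition, here is a minimal sketch of a BCM-style modification rule in which the input a cell receives from the rest of the network is replaced by a fixed mean-field background; the functional forms and constants are our assumptions, not the paper's equations:

```python
import numpy as np

# Illustrative sketch (assumptions ours): a BCM-style synaptic rule where
# the recurrent input from the rest of the network is replaced by a
# constant "mean field" background, so each cell can be analyzed as if
# it were isolated.

rng = np.random.default_rng(1)
d = rng.normal(size=(500, 10))      # stream of input patterns
m = rng.normal(scale=0.1, size=10)  # modifiable synaptic weights
mean_field = -0.2                   # average inhibition from the network
theta, eta, tau = 1.0, 0.01, 0.1    # threshold, learning rate, averaging

for x in d:
    c = m @ x + mean_field          # cell response with mean-field input
    phi = c * (c - theta)           # BCM modification function
    m += eta * phi * x              # synaptic update
    theta += tau * (c**2 - theta)   # sliding modification threshold

print("final weight norm:", np.linalg.norm(m))
```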


Speech Recognition Experiments with Perceptrons

Neural Information Processing Systems

This paper looks at two more difficult vocabularies: the alphabetic E-set and a set of polysyllabic words. The E-set is difficult because it contains weak discriminants, and polysyllables are difficult because of timing variation. Polysyllabic word recognition is aided by a time pre-alignment technique based on dynamic programming, and E-set recognition is improved by focusing attention. Recognition accuracies are better than 98% for both vocabularies when implemented with a single layer perceptron. INTRODUCTION Artificial neural networks perform well on simple pattern recognition tasks.
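
The dynamic-programming pre-alignment can be sketched as a standard dynamic time warping (DTW) computation; the distance measure and step pattern below are generic assumptions rather than the paper's exact procedure:

```python
import numpy as np

# Sketch of dynamic-programming time alignment (DTW) of the kind used to
# pre-align polysyllabic words before classification.

def dtw_distance(a, b):
    """Align two feature sequences a (n, d) and b (m, d); return cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Allow match, insertion, or deletion steps.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

rng = np.random.default_rng(2)
template = rng.normal(size=(40, 12))          # stored word template
utterance = template[::2] + 0.1 * rng.normal(size=(20, 12))  # time-warped
print("alignment cost:", dtw_distance(utterance, template))
```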


Mathematical Analysis of Learning Behavior of Neuronal Models

Neural Information Processing Systems

ABSTRACT In this paper, we wish to analyze the convergence behavior of a number of neuronal plasticity models. Recent neurophysiological research suggests that neuronal behavior is adaptive. In particular, memory stored within a neuron is associated with the synaptic weights, which are varied or adjusted to achieve learning. A number of adaptive neuronal models have been proposed in the literature. Three specific models will be analyzed in this paper: the Hebb model, the Sutton-Barto model, and the most recent trace model.
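
For reference, the sketch below implements common textbook forms of the first two rules (a plain Hebb update and a trace-based Sutton-Barto update); the exact parameterizations analyzed in the paper may differ:

```python
import numpy as np

# Sketch of two of the plasticity rules whose convergence the paper
# analyzes. The parameterizations are common textbook forms, assumed
# here for illustration rather than taken from the paper.

rng = np.random.default_rng(3)
inputs = rng.normal(size=(1000, 5))     # presynaptic input stream
w_hebb = rng.normal(scale=0.1, size=5)  # small nonzero initial weights
w_sb = rng.normal(scale=0.1, size=5)
x_bar = np.zeros(5)                     # Sutton-Barto input trace
y_bar = 0.0                             # Sutton-Barto output trace
eta, alpha = 0.001, 0.9

for x in inputs:
    y = w_hebb @ x
    w_hebb += eta * y * x               # plain Hebb: correlational growth

    y_sb = w_sb @ x
    w_sb += eta * (y_sb - y_bar) * x_bar   # Sutton-Barto: trace-based update
    x_bar = alpha * x_bar + x              # exponentially weighted traces
    y_bar = alpha * y_bar + (1 - alpha) * y_sb

print("Hebb weight norm:", np.linalg.norm(w_hebb))
print("Sutton-Barto weight norm:", np.linalg.norm(w_sb))
```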


On the Power of Neural Networks for Solving Hard Problems

Neural Information Processing Systems

The neural network model is a discrete time system that can be represented by a weighted and undirected graph. There is a weight attached to each edge of the graph and a threshold value attached to each node (neuron) of the graph. The order of the network is the number of nodes in the corresponding graph.
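
A minimal sketch of this model (with arbitrary example weights and thresholds) is a threshold network on a symmetric weight matrix, updated asynchronously:

```python
import numpy as np

# Sketch of the discrete-time model described above: a weighted undirected
# graph with a threshold per node, updated by thresholding weighted sums.
# (Example weights and thresholds are arbitrary, for illustration only.)

W = np.array([[0, 1, -2],           # symmetric weights; zero diagonal
              [1, 0, 1],
              [-2, 1, 0]], float)
t = np.array([0.5, -0.5, 0.0])      # node thresholds
s = np.array([1.0, -1.0, 1.0])      # initial +/-1 states

for _ in range(10):                 # sequential (asynchronous) updates
    for i in range(len(s)):
        s[i] = 1.0 if W[i] @ s >= t[i] else -1.0

# Energy E = -1/2 s^T W s + t^T s never increases under these updates.
print("states:", s, " energy:", -0.5 * s @ W @ s + t @ s)
```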


MURPHY: A Robot that Learns by Doing

Neural Information Processing Systems

Current Focus Of Learning Research Most connectionist learning algorithms may be grouped into three general categories, commonly referred to as supervised, unsupervised, and reinforcement learning. Supervised learning requires the explicit participation of an intelligent teacher, usually to provide the learning system with task-relevant input-output pairs (for two recent examples, see [1,2]). Unsupervised learning, exemplified by "clustering" algorithms, is generally concerned with detecting structure in a stream of input patterns [3,4,5,6,7]. In its final state, an unsupervised learning system will typically represent the discovered structure as a set of categories representing regions of the input space, or, more generally, as a mapping from the input space into a space of lower dimension that is somehow better suited to the task at hand. In reinforcement learning, a "critic" rewards or penalizes the learning system until the system ultimately produces the correct output in response to a given input pattern [8].
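
A toy sketch contrasting the three kinds of training signal is given below; the updates are generic textbook forms, not MURPHY's algorithm:

```python
import numpy as np

# Toy contrast of the three training signals described above. These are
# generic textbook updates, assumed for illustration only.

rng = np.random.default_rng(4)
x = rng.normal(size=4)
w = np.zeros(4)
eta = 0.1

# Supervised: a teacher supplies the target output for this input.
target = 1.0
w += eta * (target - w @ x) * x         # LMS-style correction

# Unsupervised: no target; move the nearest prototype toward the input.
prototypes = rng.normal(size=(3, 4))
k = np.argmin(np.linalg.norm(prototypes - x, axis=1))
prototypes[k] += eta * (x - prototypes[k])

# Reinforcement: a critic returns only a scalar reward for the action.
action = 1.0 if w @ x + rng.normal(scale=0.1) > 0 else -1.0
reward = 1.0 if action > 0 else -1.0    # stand-in critic prefers +1
w += eta * reward * action * x          # reinforce or punish action taken
```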


Neural Network Implementation Approaches for the Connection Machine

Neural Information Processing Systems

Two approaches are described which allow parallel computation of a model's nonlinear functions, parallel modification of a model's weights, and parallel propagation of a model's activation and error. Each approach also allows a model's interconnect structure to be physically dynamic. A Hopfield model is implemented with each approach at six sizes over the same number of CM processors to provide a performance comparison. INTRODUCTION Simulations of neural network models on digital computers perform various computations by applying linear or nonlinear functions, defined in a program, to weighted sums of integer or real numbers retrieved and stored by array reference. The numerical values are model-dependent parameters like time-averaged spiking frequency (activation), synaptic efficacy (weight), the error in error back-propagation models, and computational temperature in thermodynamic models. The interconnect structure of a particular model is implied by indexing relationships between arrays defined in a program. On the Connection Machine (CM), these relationships are expressed in hardware by processors interconnected by a 16-dimensional hypercube communication network. Mappings are constructed to define higher-dimensional interconnectivity between processors on top of the fundamental geometry of the communication network.
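
The data-parallel pattern can be sketched with whole-array operations standing in for per-processor parallelism; the sketch below (sizes and values arbitrary) performs one synchronous sweep of a Hopfield model entirely with array computations:

```python
import numpy as np

# Sketch of the data-parallel pattern the paper exploits, with NumPy
# vectorization standing in for the Connection Machine's per-processor
# parallelism: weighted sums, nonlinearities, and state updates are all
# expressed as whole-array operations. (Sizes and values are arbitrary.)

n = 512
rng = np.random.default_rng(5)
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                    # symmetric weights
np.fill_diagonal(W, 0)               # zero self-connections
s = np.sign(rng.normal(size=n))      # +/-1 initial states

# One synchronous Hopfield sweep: every "processor" (array element)
# computes its weighted sum and applies the nonlinearity in parallel.
net = W @ s                          # parallel weighted sums
s = np.where(net >= 0, 1.0, -1.0)    # parallel nonlinear function
print("fraction of +1 states:", (s > 0).mean())
```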


Constrained Differential Optimization

Neural Information Processing Systems

Many optimization models of neural networks need constraints to restrict the space of outputs to a subspace which satisfies external criteria. Optimizations using energy methods yield "forces" which act upon the state of the neural network. The penalty method, in which quadratic energy constraints are added to an existing optimization energy, has become popular recently, but is not guaranteed to satisfy the constraint conditions when there are other forces on the neural model or when there are multiple constraints. In this paper, we present the basic differential multiplier method (BDMM), which satisfies constraints exactly; we create forces which gradually apply the constraints over time, using "neurons" that estimate Lagrange multipliers. The basic differential multiplier method is a differential version of the method of multipliers from Numerical Analysis.
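
The BDMM dynamics described above can be sketched directly: the state performs gradient descent on the objective while a multiplier neuron performs gradient ascent on the constraint. The toy problem below is our own choice for illustration:

```python
import numpy as np

# Sketch of the basic differential multiplier method (BDMM) on a toy
# problem: minimize f(x) = x1^2 + x2^2 subject to g(x) = x1 + x2 - 1 = 0.
# The state descends on f while a "multiplier neuron" lam ascends on g;
# the solution is x = (0.5, 0.5).

x = np.array([0.0, 0.0])
lam = 0.0
dt = 0.01

for _ in range(20000):
    grad_f = 2 * x                       # df/dx
    grad_g = np.array([1.0, 1.0])        # dg/dx
    g = x.sum() - 1.0                    # constraint value
    x += dt * (-grad_f - lam * grad_g)   # gradient descent on x
    lam += dt * g                        # gradient ascent on lambda

print("x:", x, " lambda:", lam, " constraint g(x):", x.sum() - 1)
```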


A Novel Net that Learns Sequential Decision Process

Neural Information Processing Systems

We propose a new scheme to construct neural networks to classify patterns. The new scheme has several novel features: 1. We focus attention on the important attributes of patterns in ranking order.