 Roychowdhury, V. P.


A Rigorous Analysis of Linsker-type Hebbian Learning

Neural Information Processing Systems

We propose a novel, rigorous approach for the analysis of Linsker's unsupervised Hebbian learning network. The behavior of this model is determined by the underlying nonlinear dynamics, which are parameterized by a set of parameters originating from the Hebbian rule and the arbor density of the synapses. These parameters determine the presence or absence of a specific receptive field (also referred to as a 'connection pattern') as a saturated fixed-point attractor of the model. In this paper, we perform a qualitative analysis of the underlying nonlinear dynamics over the parameter space, determine the effects of the system parameters on the emergence of various receptive fields, and predict precisely within which parameter regime the network has the potential to develop a specially designated connection pattern. In particular, this approach exposes, for the first time, the crucial role played by the synaptic density functions, and provides a complete, precise picture of the parameter space that defines the relationships among the different receptive fields. Our theoretical predictions are confirmed by numerical simulations.
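
As a rough illustration of the dynamics the abstract refers to, the sketch below implements a Linsker-type averaged Hebbian update, dw/dt ∝ k1 + (Q + k2)w, with weights clipped at saturation bounds and synapse positions drawn from a Gaussian arbor density. All concrete values here (k1, k2, the bounds, the Gaussian form of Q) are assumptions for illustration; the paper's contribution is the analysis of which saturated connection patterns are attractors in each parameter regime, not a simulation recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200                      # number of synapses onto one cell
k1, k2 = -0.1, -0.5          # Hebbian-rule parameters (assumed values)
w_min, w_max = -0.5, 0.5     # saturation bounds on the weights

# Synapse positions drawn from a Gaussian arbor density.
pos = rng.normal(scale=1.0, size=(n, 2))

# Input covariance Q: a Gaussian falloff with distance, standing in
# for the covariance of activity in the previous layer.
d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
Q = np.exp(-d2 / 2.0)

w = rng.uniform(w_min, w_max, size=n)
eta = 1e-3                   # learning rate

for _ in range(5000):
    # Linsker-type averaged Hebbian rule: dw ∝ k1 + (Q + k2) w
    w += eta * (k1 + (Q + k2) @ w / n)
    # Clipping enforces saturation; the fixed-point attractors the
    # paper analyzes live on these bounds.
    np.clip(w, w_min, w_max, out=w)

print("fraction of saturated weights:",
      float(np.mean((w == w_min) | (w == w_max))))
```

Sweeping k1 and k2 in a sketch like this and inspecting the saturated sign pattern of w is one way to visualize the parameter-regime predictions the abstract describes.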


A Rigorous Analysis of Linsker-type Hebbian Learning

Neural Information Processing Systems

Linsker's simulations have shown that, for appropriate parameter regimes, several structured connection patterns (e.g., centre-surround and oriented afferent receptive fields (aRFs)) occur progressively as the Hebbian evolution of the weights is carried out layer by layer. The behavior of Linsker's model is determined by the underlying nonlinear dynamics, which are parameterized by a set of parameters originating from the Hebbian rule and the arbor density of the synapses.


Complexity Issues in Neural Computation and Learning

Neural Information Processing Systems

The general goal of this workshop was to bring together researchers working toward developing a theoretical framework for the analysis and design of neural networks. The technical focus of the workshop was to address recent developments in this direction. The primary topics addressed the following three areas: 1) computational complexity issues in neural networks, 2) complexity issues in learning, and 3) convergence and numerical properties of learning algorithms. Such studies, in turn, have generated considerable research interest. A similar development can be observed in the area of learning as well: techniques primarily developed in the classical theory of learning are being applied to understand the generalization and learning characteristics of neural networks.


On the Circuit Complexity of Neural Networks

Neural Information Processing Systems

Viewing n-variable boolean functions as vectors in R^{2^n}, we invoke tools from linear algebra and linear programming to derive new results on the realizability of boolean functions using threshold gates. Using this approach, one can obtain: (1) upper bounds on the number of spurious memories in Hopfield networks, and on the number of functions implementable by a depth-d threshold circuit; (2) a lower bound on the number of orthogonal input functions.
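
The "functions as vectors" viewpoint lends itself to a small concrete check: an n-variable boolean function is its ±1 value vector in R^{2^n}, and realizability by a single threshold gate sign(w·x − t) reduces to a linear feasibility problem that an LP solver can decide. The Python sketch below (illustration only; the paper uses this machinery to derive counting bounds, not to run a solver) tests AND and XOR on two variables.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def is_threshold_function(f_vals, n):
    """f_vals: the function's +/-1 value vector over {-1,1}^n, listed in
    itertools.product order -- i.e., its point in R^(2^n)."""
    X = np.array(list(itertools.product([-1, 1], repeat=n)))
    # Feasibility: f(x) * (w.x - t) >= 1 for every input x (margin 1 is
    # harmless, since any strict separation can be rescaled), rewritten
    # for linprog as  -f(x) * (x, -1) . (w, t) <= -1.
    A = -f_vals[:, None] * np.hstack([X, -np.ones((len(X), 1))])
    res = linprog(c=np.zeros(n + 1),          # pure feasibility: cost 0
                  A_ub=A, b_ub=-np.ones(len(X)),
                  bounds=[(None, None)] * (n + 1))  # w, t unconstrained
    return res.status == 0                    # 0 = feasible, 2 = infeasible

n = 2
inputs = list(itertools.product([-1, 1], repeat=n))
and_vec = np.array([1 if x == (1, 1) else -1 for x in inputs])
xor_vec = np.array([x[0] * x[1] for x in inputs])
print(is_threshold_function(and_vec, n))  # True: AND is a threshold gate
print(is_threshold_function(xor_vec, n))  # False: parity is not
```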

