Learnability and the Vapnik-Chervonenkis dimension

Classics

Valiant’s learnability model is extended to learning classes of concepts defined by regions in Euclidean space E^n. The methods in this paper lead to a unified treatment of some of Valiant’s results, along with previous results on the distribution-free convergence of certain pattern recognition algorithms. It is shown that the essential condition for distribution-free learnability is finiteness of the Vapnik-Chervonenkis dimension, a simple combinatorial parameter of the class of concepts to be learned. Using this parameter, the complexity and closure properties of learnable classes are analyzed, and necessary and sufficient conditions are provided for feasible learnability. JACM, 36(4), 929-965.
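
The shattering notion behind the VC dimension can be made concrete with a small brute-force check. The sketch below is illustrative only and not from the paper: for the simple class of one-dimensional threshold concepts {x : x <= t} (an assumed example class), it tests whether a finite point set is shattered, i.e. whether every binary labeling of the points is realized by some concept in the class.

```python
# Illustrative sketch (not from the paper): brute-force shattering check
# for the concept class of one-dimensional thresholds {x : x <= t}.
from itertools import product

def realizable_by_threshold(points, labels):
    """True if some threshold t produces exactly the given labels via x <= t."""
    candidates = [min(points) - 1.0] + list(points)
    return any(all((x <= t) == bool(lbl) for x, lbl in zip(points, labels))
               for t in candidates)

def shattered(points):
    """True if every 0/1 labeling of `points` is realized by some threshold."""
    return all(realizable_by_threshold(points, labels)
               for labels in product([0, 1], repeat=len(points)))

print(shattered([0.0]))        # True: a single point can be labeled both ways
print(shattered([0.0, 1.0]))   # False: no threshold labels 0 -> 1 and 1 -> 0
```

Because two points on the line already defeat every threshold, this example class has VC dimension 1; finiteness of that parameter is exactly the condition the paper ties to distribution-free learnability.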


Discovering Structure from Motion in Monkey, Man and Machine

Neural Information Processing Systems

Using a parallel processing model, the current work explores how the biological visual system might solve the structure-from-motion problem and how the neurophysiologist might go about understanding the solution.


Generalization of Back-Propagation to Recurrent and Higher Order Neural Networks

Neural Information Processing Systems

Fernando J. Pineda, Applied Physics Laboratory, Johns Hopkins University. A general method for deriving backpropagation algorithms for networks with recurrent and higher order connections is introduced. The propagation of activation in these networks is determined by dissipative differential equations. The error signal is backpropagated by integrating an associated differential equation. The method is introduced by applying it to the recurrent generalization of the feedforward backpropagation network. It is then extended to the case of higher order networks and to a constrained dynamical system for training a content-addressable memory. The essential feature of the adaptive algorithms is that the adaptive equation has a simple outer product form.
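
As a rough illustration of the scheme described, the following NumPy sketch relaxes a recurrent network to a fixed point, relaxes an associated adjoint equation to propagate the error backward, and applies the outer-product weight update. The network size, Euler integration, learning rate, and squared-error loss on two assumed output units are all choices made for this example, not details taken from the paper.

```python
# Illustrative sketch of recurrent backpropagation in the spirit of the
# abstract: forward and error dynamics are both relaxed to fixed points,
# and the weight update has an outer product form. Sizes, targets, and the
# squared-error loss are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)
n, outputs = 8, [6, 7]            # assumed network size and output units
W = 0.1 * rng.standard_normal((n, n))
I = rng.standard_normal(n)        # external input
T = np.array([0.3, -0.2])         # assumed targets for the output units
g = np.tanh
dg = lambda x: 1.0 - np.tanh(x) ** 2

def relax(f, x0, dt=0.1, steps=500):
    """Euler-integrate dx/dt = f(x) until (approximately) a fixed point."""
    x = x0
    for _ in range(steps):
        x = x + dt * f(x)
    return x

for step in range(100):
    # Forward dynamics: dx/dt = -x + W g(x) + I, relaxed to x*.
    x_star = relax(lambda x: -x + W @ g(x) + I, np.zeros(n))

    # Error injected at the output units only.
    J = np.zeros(n)
    J[outputs] = T - x_star[outputs]

    # Adjoint dynamics: dy/dt = -y + g'(x*) * (W.T @ y) + J, relaxed to y*.
    y_star = relax(lambda y: -y + dg(x_star) * (W.T @ y) + J, np.zeros(n))

    # Outer product update: dW[i, j] proportional to y*_i g(x*_j).
    W += 0.1 * np.outer(y_star, g(x_star))

x_star = relax(lambda x: -x + W @ g(x) + I, np.zeros(n))
print("remaining output error:", T - x_star[outputs])
```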


Neural Network Implementation Approaches for the Connection Machine

Neural Information Processing Systems

Two approaches are described which allow parallel computation of a model's nonlinear functions, parallel modification of a model's weights, and parallel propagation of a model's activation and error. Each approach also allows a model's interconnect structure to be physically dynamic. A Hopfield model is implemented with each approach at six sizes over the same number of CM processors to provide a performance comparison. Simulations of neural network models on digital computers perform various computations by applying linear or nonlinear functions, defined in a program, to weighted sums of integer or real numbers retrieved and stored by array reference. The numerical values are model-dependent parameters such as time-averaged spiking frequency (activation), synaptic efficacy (weight), the error in error backpropagation models, and computational temperature in thermodynamic models. The interconnect structure of a particular model is implied by indexing relationships between arrays defined in a program. On the Connection Machine (CM), these relationships are expressed in hardware processors interconnected by a 16-dimensional hypercube communication network. Mappings are constructed to define higher-dimensional interconnectivity between processors on top of the fundamental geometry of the communication network.
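
For reference, the computation being parallelized, a weighted sum per unit followed by a nonlinear function, looks as follows for a Hopfield model in a small serial NumPy sketch; the network size and stored patterns are assumptions, and nothing here models the Connection Machine mapping itself.

```python
# Minimal serial reference for the computation being parallelized: a Hopfield
# update is a weighted sum per unit followed by a sign nonlinearity. The size
# and stored patterns are assumptions; the CM processor mapping is not modeled.
import numpy as np

rng = np.random.default_rng(1)
n = 64
patterns = rng.choice([-1, 1], size=(3, n))        # assumed stored patterns

# Hebbian weights with zero diagonal.
W = (patterns.T @ patterns).astype(float) / n
np.fill_diagonal(W, 0.0)

def recall(state, steps=20):
    """Synchronously update all units: s <- sign(W s)."""
    for _ in range(steps):
        state = np.where(W @ state >= 0.0, 1, -1)
    return state

# Start from a corrupted copy of the first pattern and let the network settle.
probe = patterns[0].copy()
flip = rng.choice(n, size=8, replace=False)
probe[flip] *= -1
print("overlap with stored pattern:", recall(probe) @ patterns[0] / n)
```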


Time-Sequential Self-Organization of Hierarchical Neural Networks

Neural Information Processing Systems

Lateral inhibition operating in the surround of firing cells in each layer provides for unsupervised capture of excitation patterns presented by the previous layer. By presenting patterns of increasing complexity, in coordination with network self-organization, higher levels of the hierarchy capture concepts implicit in the pattern set. A fundamental difficulty in the self-organization of hierarchical, multi-layered networks of simple neuron-like cells is determining the direction of adjustment of synaptic link weights between neural layers not directly connected to input or output patterns. Several different approaches have been used to address this problem. One is to provide teaching inputs to the cells in internal layers of the hierarchy.
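
One generic way to picture unsupervised capture via lateral inhibition is winner-take-all competitive learning, sketched below; this is a stand-in illustration rather than the authors' network, and the layer size, learning rate, and weight normalization are assumptions.

```python
# Illustrative winner-take-all competitive-learning sketch: lateral inhibition
# is approximated by letting only the most strongly driven cell in a layer
# adapt, so each cell comes to "capture" a cluster of input patterns.
# Layer size, learning rate, and input statistics are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_in, n_cells = 16, 4
W = rng.random((n_cells, n_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def present(pattern, lr=0.1):
    """One unsupervised step: the winning cell moves toward the pattern."""
    drive = W @ pattern                      # excitation of each cell
    winner = int(np.argmax(drive))           # surround is silenced (inhibition)
    W[winner] += lr * (pattern - W[winner])  # only the winner adapts
    W[winner] /= np.linalg.norm(W[winner])
    return winner

# Four noisy prototype patterns; each cell should specialize to one of them.
prototypes = rng.random((4, n_in))
for _ in range(500):
    k = rng.integers(4)
    present(prototypes[k] + 0.05 * rng.standard_normal(n_in))
print("cell assigned to each prototype:",
      [int(np.argmax(W @ p)) for p in prototypes])
```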


A Computer Simulation of Cerebral Neocortex: Computational Capabilities of Nonlinear Neural Networks

Neural Information Processing Systems

A synthetic neural network simulation of cerebral neocortex was developed based on detailed anatomy and physiology. Processing elements possess temporal nonlinearities and connection patterns similar to those of cortical neurons. The network was able to replicate spatial and temporal integration properties found experimentally in neocortex. A certain level of randomness was found to be crucial for the robustness of at least some of the network's computational capabilities. Emphasis was placed on how synthetic simulations can be of use to the study of both artificial and biological neural networks.
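
The temporal integration ascribed to the processing elements can be pictured with a simple leaky-integrator unit driven by noisy input; the sketch below is a generic illustration with an assumed time constant, threshold, and noise level, not the anatomically detailed model described in the paper.

```python
# Generic leaky-integrator sketch to picture temporal integration with a
# nonlinearity and injected randomness. The time constant, threshold, and
# noise level are assumptions; this is not the paper's detailed cortical model.
import numpy as np

rng = np.random.default_rng(3)
dt, tau, threshold = 1.0, 10.0, 0.5     # ms, ms, arbitrary units
v, spikes = 0.0, []

# Two brief input pulses close in time summate; an isolated pulse does not.
drive = np.zeros(200)
drive[[20, 25, 120]] = 0.35

for t, inp in enumerate(drive):
    noise = 0.02 * rng.standard_normal()
    v += dt / tau * (-v) + inp + noise   # leaky temporal integration
    if v > threshold:                    # simple output nonlinearity
        spikes.append(t)
        v = 0.0

print("spike times (ms):", spikes)      # expect a spike near t=25, none near 120
```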