Predicting Complex Behavior in Sparse Asymmetric Networks

Neural Information Processing Systems

Recurrent networks of threshold elements have been studied intensively as associative memories and pattern-recognition devices, though most research has concentrated on fully-connected symmetric networks.


Destabilization and Route to Chaos in Neural Networks with Random Connectivity

Neural Information Processing Systems

The occurrence of chaos in recurrent neural networks is thought to depend on the architecture and on the synaptic coupling strength. It is studied here for a randomly diluted architecture. By normalizing the variance of synaptic weights, we produce a bifurcation parameter, dependent on this variance and on the slope of the transfer function but independent of the connectivity, that allows sustained activity and the occurrence of chaos when it reaches a critical value. Even for weak connectivity and small size, we find numerical results in accordance with the theoretical ones previously established for fully connected, infinite-sized networks. Moreover, the route towards chaos is numerically checked to be quasi-periodic, whatever the type of the first bifurcation (Hopf, pitchfork, or flip).
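
To make the normalization concrete, here is a minimal numerical sketch, assuming discrete-time tanh dynamics and Gaussian weights (our own illustrative choices, not necessarily the paper's setup). Scaling the weights by 1/sqrt(cN) makes the input variance depend on the parameter g but not on the connectivity c:

```python
# Minimal sketch: randomly diluted recurrent network with weights normalized
# so the bifurcation parameter g is independent of the connectivity c.
# (Discrete-time tanh dynamics assumed for illustration.)
import numpy as np

rng = np.random.default_rng(0)
N, c = 200, 0.1      # network size and connection probability (dilution)
g = 1.5              # bifurcation parameter; sustained/chaotic activity
                     # is expected once g exceeds a critical value

mask = rng.random((N, N)) < c
J = mask * rng.normal(0.0, g / np.sqrt(c * N), size=(N, N))

x = rng.normal(0.0, 1.0, N)
for _ in range(1000):
    x = np.tanh(J @ x)   # transfer-function slope at 0 is absorbed into g

print("mean |activity| after transient:", np.abs(x).mean())
```

Running this with g below 1 lets the activity decay to the zero fixed point, while larger g sustains it, mirroring the critical-value behavior described above.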


On the Use of Evidence in Neural Networks

Neural Information Processing Systems

The Bayesian "evidence" approximation has recently been employed to determine the noise and weight-penalty terms used in back-propagation. This paper shows that for neural nets it is far easier to use the exact result than it is to use the evidence approximation. Moreover, unlike the evidence approximation, the exact result neither has to be re-calculated for every new data set nor requires the running of computer code (the exact result is closed form). In addition, it turns out that the evidence procedure's MAP estimate for neural nets is, in toto, approximation error. Another advantage of the exact analysis is that it does not lead one to incorrect intuition, like the claim that using evidence one can "evaluate different priors in light of the data". This paper also discusses sufficiency conditions for the evidence approximation to hold, why it can sometimes give "reasonable" results, etc.


Probability Estimation from a Database Using a Gibbs Energy Model

Neural Information Processing Systems

We present an algorithm for creating a neural network that produces accurate probability estimates as outputs. The network implements a Gibbs probability distribution model of the training database. This model is created by a new transformation relating the joint probabilities of attributes in the database to the weights (Gibbs potentials) of the distributed network model. The theory of this transformation is presented together with experimental results. One advantage of this approach is that the network weights are prescribed without iterative gradient descent. Used as a classifier, the network tied or outperformed published results on a variety of databases.
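
As a loose illustration of prescribing weights directly from database statistics, one can set pairwise potentials from empirical joint and marginal probabilities without any gradient descent. The pairwise log-ratio below is our own simplification; the paper's transformation is more general:

```python
# Hedged sketch: one simple way to set pairwise "Gibbs potentials" directly
# from empirical joint probabilities, with no iterative training.
import numpy as np

rng = np.random.default_rng(1)
data = rng.integers(0, 2, size=(5000, 4))   # toy binary database, 4 attributes

eps = 1e-6
p_i  = data.mean(axis=0)                    # marginal P(x_i = 1)
p_ij = (data.T @ data) / len(data)          # joint P(x_i = 1, x_j = 1)

# Pairwise potential: log-ratio of joint probability to product of marginals.
W = np.log((p_ij + eps) / (np.outer(p_i, p_i) + eps))
np.fill_diagonal(W, 0.0)

def energy(x, W=W):
    """Unnormalized Gibbs energy of a binary configuration x."""
    return -0.5 * x @ W @ x

print(energy(np.array([1, 0, 1, 1])))
```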


Statistical Mechanics of Learning in a Large Committee Machine

Neural Information Processing Systems

We use statistical mechanics to study generalization in large committee machines. For an architecture with nonoverlapping receptive fields a replica calculation yields the generalization error in the limit of a large number of hidden units.
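
For concreteness, a committee machine with nonoverlapping receptive fields can be sketched as follows; sign activations and a majority vote over the hidden units are assumed for illustration:

```python
# Minimal sketch of a committee machine with nonoverlapping receptive fields:
# K hidden units, each seeing a disjoint slice of the input, vote by sign.
import numpy as np

rng = np.random.default_rng(2)
K, n = 5, 10                       # hidden units, inputs per receptive field
W = rng.normal(size=(K, n))        # one weight vector per receptive field

def committee(x):                  # x has shape (K * n,)
    fields = x.reshape(K, n)       # disjoint receptive fields
    votes = np.sign(np.sum(W * fields, axis=1))
    return np.sign(votes.sum())    # majority vote of the hidden units

print(committee(rng.normal(size=K * n)))
```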


Weight Space Probability Densities in Stochastic Learning: II. Transients and Basin Hopping Times

Neural Information Processing Systems

In stochastic learning, weights are random variables whose time evolution is governed by a Markov process. We summarize the theory of the time evolution of the weight-space probability density P, and give graphical examples of the time evolution that contrast the behavior of stochastic learning with true gradient descent (batch learning). Finally, we use the formalism to obtain predictions of the time required for noise-induced hopping between basins of different optima. We compare the theoretical predictions with simulations of large ensembles of networks for simple problems in supervised and unsupervised learning. Despite the recent application of convergence theorems from stochastic approximation theory to neural network learning (Oja 1982, White 1989), there remain outstanding questions about the search dynamics in stochastic learning.
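
The basin-hopping phenomenon can be illustrated with a toy ensemble simulation; the one-dimensional double-well cost below is our own example, not the networks studied in the paper:

```python
# Toy illustration: an ensemble of stochastic-gradient learners on the
# double-well cost C(w) = (w^2 - 1)^2, which has optima at w = -1 and w = +1.
# Noise lets learners hop between basins, as batch gradient descent cannot.
import numpy as np

rng = np.random.default_rng(3)
n_nets, steps, lr = 1000, 5000, 0.05
w = -np.ones(n_nets)                       # start all learners in the left basin

for _ in range(steps):
    grad = 4 * w * (w**2 - 1)              # gradient of the double-well cost
    noise = rng.normal(0.0, 3.0, n_nets)   # stands in for per-example sampling noise
    w = np.clip(w - lr * (grad + noise), -3.0, 3.0)   # clip keeps the toy stable

print("fraction that hopped to the right basin:", (w > 0).mean())
```

Tracking the hopped fraction over time gives an empirical hopping-time estimate of the kind the formalism is meant to predict.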


Unsupervised Discrimination of Clustered Data via Optimization of Binary Information Gain

Neural Information Processing Systems

We present the information-theoretic derivation of a learning algorithm that clusters unlabelled data with linear discriminants. In contrast to methods that try to preserve information about the input patterns, we maximize the information gained from observing the output of robust binary discriminators implemented with sigmoid nodes. We derive a local weight adaptation rule via gradient ascent in this objective, demonstrate its dynamics on some simple data sets, relate our approach to previous work and suggest directions in which it may be extended.
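
A hedged sketch of this style of objective, as our own illustrative stand-in rather than the paper's exact derivation: maximize I = H(mean output) - mean(H(output)) for a single sigmoid discriminator by gradient ascent, which drives the unit to split unlabelled data into two confident groups:

```python
# Sketch: gradient ascent on a binary information objective for one sigmoid
# discriminator, I = H(mean(y)) - mean(H(y)). Illustrative, not the paper's rule.
import numpy as np

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-2, 1, (100, 2)),     # two unlabelled clusters
               rng.normal(+2, 1, (100, 2))])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

w = rng.normal(size=2) * 0.01
for _ in range(500):
    y = sigmoid(X @ w)
    p_bar = y.mean()
    # dI/dy_m = (1/M) * [log((1-p_bar)/p_bar) - log((1-y_m)/y_m)]
    dI_dy = (np.log((1 - p_bar) / p_bar) - np.log((1 - y) / y)) / len(y)
    w += X.T @ (dI_dy * y * (1 - y))            # chain rule through the sigmoid

y = sigmoid(X @ w)
print("fraction y > 0.5 per cluster:", (y[:100] > 0.5).mean(), (y[100:] > 0.5).mean())
```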


Some Solutions to the Missing Feature Problem in Vision

Neural Information Processing Systems

In visual processing the ability to deal with missing and noisy information is crucial. Occlusions and unreliable feature detectors often lead to situations where little or no direct information about features is available. However, the available information is usually sufficient to highly constrain the outputs. We discuss Bayesian techniques for extracting class probabilities given partial data. The optimal solution involves integrating over the missing dimensions weighted by the local probability densities. We show how to obtain closed-form approximations to the Bayesian solution using Gaussian basis function networks.
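
For diagonal Gaussian basis functions the integration over missing dimensions is analytic: marginalizing a diagonal Gaussian simply drops the missing dimensions. A toy sketch with illustrative parameters and one basis unit per class:

```python
# Sketch of the closed-form idea: class probabilities from observed features
# only, by evaluating each diagonal Gaussian on the observed dimensions.
import numpy as np

means  = np.array([[0.0, 0.0], [3.0, 3.0]])   # one basis unit per class
sigmas = np.array([[1.0, 1.0], [1.0, 1.0]])
priors = np.array([0.5, 0.5])

def class_probs(x, observed):
    """x: feature vector; observed: boolean mask of available features."""
    o = np.asarray(observed)
    # Marginal Gaussian density over the observed dimensions only.
    d2 = ((x[o] - means[:, o]) / sigmas[:, o]) ** 2
    lik = np.exp(-0.5 * d2.sum(axis=1)) / np.prod(sigmas[:, o], axis=1)
    post = priors * lik
    return post / post.sum()

print(class_probs(np.array([2.5, 0.0]), [True, False]))   # feature 2 missing
```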


The Computation of Stereo Disparity for Transparent and for Opaque Surfaces

Neural Information Processing Systems

The classical computational model for stereo vision incorporates a uniqueness inhibition constraint to enforce a one-to-one feature match, thereby sacrificing the ability to handle transparency. Critics of the model disregard the uniqueness constraint and argue that the smoothness constraint can provide the excitation support required for transparency computation.


Filter Selection Model for Generating Visual Motion Signals

Neural Information Processing Systems

We present a model of how MT cells aggregate responses from V1 to form a velocity representation. Two different sets of units, with local receptive fields, receive inputs from motion energy filters. One set of units forms estimates of local motion, while the second set computes the utility of these estimates. Outputs from this second set of units "gate" the outputs from the first set through a gain control mechanism. This active process for selecting only a subset of local motion responses to integrate into more global responses distinguishes our model from previous models of velocity estimation.
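
A minimal sketch of the gating idea follows; the arrays are toys, and the motion-energy filtering and utility computation are stand-ins for the model's actual units:

```python
# Sketch: a "utility" signal from a second set of units multiplicatively
# gates local motion estimates before they are pooled into a global response.
import numpy as np

rng = np.random.default_rng(5)
n_locations = 16
local_velocity = rng.normal(1.0, 0.5, n_locations)   # per-location estimates
utility = rng.uniform(0.0, 1.0, n_locations)         # confidence in each

gate = utility / (utility.sum() + 1e-9)              # gain-control normalization
global_velocity = np.sum(gate * local_velocity)      # pooled MT-like response

print("gated global velocity estimate:", global_velocity)
```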