Effects of Spike Timing Underlying Binocular Integration and Rivalry in a Neural Model of Early Visual Cortex

Neural Information Processing Systems

In normal vision, the inputs from the two eyes are integrated into a single percept. When dissimilar images are presented to the two eyes, however, perceptual integration gives way to alternation between monocular inputs, a phenomenon called binocular rivalry. Although recent evidence indicates that binocular rivalry involves a modulation of neuronal responses in extrastriate cortex, the basic mechanisms responsible for differential processing of conflicting …


Mapping a Manifold of Perceptual Observations

Neural Information Processing Systems

Nonlinear dimensionality reduction is formulated here as the problem of trying to find a Euclidean feature-space embedding of a set of observations that preserves as closely as possible their intrinsic metric structure: the distances between points on the observation manifold as measured along geodesic paths.
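The construction the abstract describes (a neighborhood graph, graph geodesics, then a metric-preserving Euclidean embedding) can be sketched in a few lines. This is an illustrative reconstruction, not the paper's algorithm; the function name, neighborhood size, and the use of classical MDS on the geodesic distances are assumptions of the sketch:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform

def geodesic_embedding(X, n_neighbors=6, n_components=2):
    """Euclidean embedding that preserves graph-geodesic distances."""
    n = len(X)
    D = squareform(pdist(X))                 # ambient pairwise distances
    G = np.full((n, n), np.inf)              # inf = no edge
    for i in range(n):                       # connect k nearest neighbors
        nbrs = np.argsort(D[i])[1:n_neighbors + 1]
        G[i, nbrs] = D[i, nbrs]
    G = np.minimum(G, G.T)                   # symmetrize the graph
    geo = shortest_path(G, method="D")       # geodesics along the manifold
    # Classical MDS on the geodesic distance matrix.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (geo ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

On a curve embedded in 3-D, the recovered 1-D coordinate should track position along the curve rather than straight-line distance in the ambient space.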


A 1,000-Neuron System with One Million 7-bit Physical Interconnections

Neural Information Processing Systems

An asynchronous PDM (Pulse-Density-Modulating) digital neural network system has been developed in our laboratory. It consists of one thousand neurons that are physically interconnected via one million 7-bit synapses. It can solve one thousand simultaneous nonlinear first-order differential equations in a fully parallel and continuous fashion. The performance of this system was measured by a winner-take-all network with one thousand neurons. Although the magnitudes of the input and network parameters were identical for each competing neuron, one of them won in 6 milliseconds.
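The winner-take-all experiment can be mimicked in simulation. The sketch below is not the PDM hardware model: it is a small Lotka-Volterra competition network (20 units rather than 1,000, with illustrative parameters) in which identical inputs plus a tiny initial mismatch still produce a single winner:

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 competing units (rather than 1,000, to keep the sketch fast); with
# lateral inhibition strength k > 1, exactly one unit should survive even
# though every unit receives the same input.
N, k, dt = 20, 2.0, 0.01
x = 0.05 * (1.0 + 0.01 * rng.standard_normal(N))  # identical up to tiny mismatch

for _ in range(200_000):
    s = x.sum()
    # Lotka-Volterra competition: each unit grows logistically and is
    # inhibited by the pooled activity of all the others.
    x += dt * x * (1.0 - x - k * (s - x))
    x = np.maximum(x, 0.0)

winners = int((x > 0.9).sum())
```

As in the hardware experiment, symmetry breaking comes only from tiny mismatch in initial conditions (here injected as noise, in the chip presumably as device variation).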


Blind Separation of Radio Signals in Fading Channels

Neural Information Processing Systems

We apply information maximization / maximum likelihood blind source separation [2, 6] to complex-valued signals mixed with complex-valued nonstationary matrices. This case arises in radio communications with baseband signals. We incorporate known source signal distributions in the adaptation, thus making the algorithms less "blind". This results in a drastic reduction of the amount of data needed for successful convergence. Adaptation to rapidly changing signal mixing conditions, such as fading in mobile communications, now becomes feasible, as demonstrated by simulations.

1 Introduction

In SDMA (spatial division multiple access) the purpose is to separate the radio signals of interfering users (either intentional or accidental) from each other on the basis of the spatial characteristics of the signals, using smart antennas, array processing, and beamforming [5, 8].
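The complex-valued, known-distribution algorithms of the paper are more involved, but the underlying idea can be illustrated with the real-valued natural-gradient infomax rule on a 2x2 instantaneous mixture of super-Gaussian (Laplacian) sources; all parameter values here are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two super-Gaussian (Laplacian) sources, standing in for modulated radio
# signals, mixed by an unknown 2x2 matrix.
n = 5000
S = rng.laplace(size=(2, n))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Natural-gradient infomax / maximum-likelihood update with a tanh score,
# appropriate for super-Gaussian sources: dW = lr * (I - tanh(Y) Y^T / n) W.
W = np.eye(2)
lr = 0.05
for _ in range(2000):
    Y = W @ X
    W += lr * (np.eye(2) - np.tanh(Y) @ Y.T / n) @ W

Y = W @ X   # recovered sources (up to permutation and scale)
```

Incorporating the known source distribution amounts to replacing the generic tanh score with the score function of the actual signal constellation, which is what shrinks the data requirement in the paper.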


On-line Learning from Finite Training Sets in Nonlinear Networks

Neural Information Processing Systems

Online learning is one of the most common forms of neural network training. We present an analysis of online learning from finite training sets for nonlinear networks (namely, soft-committee machines), advancing the theory to more realistic learning scenarios. Dynamical equations are derived for an appropriate set of order parameters; these are exact in the limiting case of either linear networks or infinite training sets. Preliminary comparisons with simulations suggest that the theory captures some effects of finite training sets, but may not yet account correctly for the presence of local minima.
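A minimal simulation of the setting (a student soft-committee machine trained online on a finite training set drawn from a matched teacher) might look as follows; the sizes, learning rate, and tanh rather than erf hidden units are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

d, K, P = 20, 3, 100                          # input dim, hidden units, set size
B = rng.standard_normal((K, d)) / np.sqrt(d)  # teacher weights

def committee(W, X):
    # Soft committee machine: unweighted sum of tanh hidden units.
    return np.tanh(X @ W.T).sum(axis=1)

Xtrain = rng.standard_normal((P, d))          # the finite training set
ytrain = committee(B, Xtrain)

W = 0.1 * rng.standard_normal((K, d))         # student, same architecture
lr = 0.02
err_init = float(np.mean((committee(W, Xtrain) - ytrain) ** 2))

for _ in range(20_000):                       # online: one random example per step
    i = rng.integers(P)
    h = np.tanh(W @ Xtrain[i])
    delta = h.sum() - ytrain[i]
    W -= lr * delta * (1 - h ** 2)[:, None] * Xtrain[i][None, :]

err_final = float(np.mean((committee(W, Xtrain) - ytrain) ** 2))
```

Because examples are re-sampled from a fixed set of P patterns rather than drawn fresh, the weight updates become correlated with the training data, which is exactly the effect the paper's order-parameter equations aim to capture.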


An Analog VLSI Model of the Fly Elementary Motion Detector

Neural Information Processing Systems

Flies are capable of rapidly detecting and integrating visual motion information in behaviorally relevant ways. The first stage of visual motion processing in flies is a retinotopic array of functional units known as elementary motion detectors (EMDs). Several decades ago, Reichardt and colleagues developed a correlation-based model of motion detection that described the behavior of these neural circuits. We have implemented a variant of this model in a 2.0-µm analog CMOS VLSI process. The result is a low-power, continuous-time analog circuit with integrated photoreceptors that responds to motion in real time. The responses of the circuit to drifting sinusoidal gratings qualitatively resemble the temporal frequency response, spatial frequency response, and direction selectivity of motion-sensitive neurons observed in insects. In addition to its possible engineering applications, the circuit could potentially be used as a building block for constructing hardware models of higher-level insect motion integration.
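The correlation-based EMD model referenced above is easy to state in discrete time: each of two neighboring inputs passes through a first-order low-pass filter (the delay), is multiplied against the undelayed neighbor, and the two products are subtracted. The following sketch uses illustrative stimulus and filter parameters, not the circuit's values:

```python
import numpy as np

def emd_response(direction, f=2.0, wavelength=0.1, spacing=0.02,
                 tau=0.05, dt=1e-4, t_end=2.0):
    """Mean output of a two-input Reichardt correlator for a sinusoidal
    grating of temporal frequency f drifting in direction +1 or -1."""
    t = np.arange(0.0, t_end, dt)
    phase = 2 * np.pi * spacing / wavelength   # spatial phase offset
    s1 = np.sin(2 * np.pi * f * t)
    s2 = np.sin(2 * np.pi * f * t - direction * phase)
    lp1 = np.zeros_like(s1)                    # first-order low-pass
    lp2 = np.zeros_like(s2)                    # filters act as delays
    for i in range(1, len(t)):
        lp1[i] = lp1[i - 1] + dt / tau * (s1[i - 1] - lp1[i - 1])
        lp2[i] = lp2[i - 1] + dt / tau * (s2[i - 1] - lp2[i - 1])
    # Opponent stage: delayed-left x right minus delayed-right x left.
    return float(np.mean(lp1 * s2 - lp2 * s1))
```

The sign of the time-averaged output encodes direction, and sweeping f or wavelength reproduces the band-pass temporal and spatial frequency tuning the abstract describes.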


Factorizing Multivariate Function Classes

Neural Information Processing Systems

The mathematical framework for factorizing equivalence classes of multivariate functions is formulated in this paper. Independent component analysis is shown to be a special case of this decomposition.


Generalization in Decision Trees and DNF: Does Size Matter?

Neural Information Processing Systems

Recent theoretical results for pattern classification with thresholded real-valued functions (such as support vector machines, sigmoid networks, and boosting) give bounds on misclassification probability that do not depend on the size of the classifier, and hence can be considerably smaller than the bounds that follow from the VC theory. In this paper, we show that these techniques can be more widely applied, by representing other boolean functions as two-layer neural networks (thresholded convex combinations of boolean functions).


Minimax and Hamiltonian Dynamics of Excitatory-Inhibitory Networks

Neural Information Processing Systems

A Lyapunov function for excitatory-inhibitory networks is constructed. The construction assumes symmetric interactions within the excitatory and inhibitory populations of neurons, and antisymmetric interactions between populations. The Lyapunov function yields sufficient conditions for the global asymptotic stability of fixed points. If these conditions are violated, limit cycles may be stable. The relations of the Lyapunov function to optimization theory and classical mechanics are revealed by minimax and dissipative Hamiltonian forms of the network dynamics. The dynamics of a neural network with symmetric interactions provably converges to fixed points under very general assumptions [1, 2].
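The assumed connectivity structure (symmetric within-population blocks, antisymmetric between-population coupling) is easy to instantiate numerically. The sketch below is not the paper's Lyapunov construction; it simply builds such a weight matrix, keeps the overall gain small enough that convergence is guaranteed by contraction, and integrates leaky rate dynamics to a fixed point:

```python
import numpy as np

rng = np.random.default_rng(4)

nE, nI = 8, 4
# Symmetric interactions within the excitatory and inhibitory populations.
WEE = rng.standard_normal((nE, nE))
WEE = (WEE + WEE.T) / 2
WII = rng.standard_normal((nI, nI))
WII = (WII + WII.T) / 2
# Antisymmetric interactions between the populations: W_EI = -W_IE^T.
WIE = rng.standard_normal((nI, nE))
W = np.block([[WEE, -WIE.T],
              [WIE,  WII]])
W *= 0.9 / np.linalg.norm(W, 2)   # spectral norm < 1 => contracting dynamics

b = 0.5 * rng.standard_normal(nE + nI)
x = rng.standard_normal(nE + nI)
dt = 0.05
for _ in range(10_000):           # leaky rate dynamics: x' = -x + W tanh(x) + b
    x += dt * (-x + W @ np.tanh(x) + b)

residual = float(np.linalg.norm(-x + W @ np.tanh(x) + b))
```

Rescaling to spectral norm 0.9 is a crude sufficient condition chosen for the demo; the paper's Lyapunov conditions are sharper and also delimit the regime where limit cycles appear instead.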


Adaptive Choice of Grid and Time in Reinforcement Learning

Neural Information Processing Systems

Consistency problems arise if the discretization needs to be refined, e.g. for more accuracy, application of multi-grid iteration, or better starting values for the iteration of the approximate optimal value function. In [7] it was shown that, for diffusion-dominated problems, a state-to-time discretization ratio k/h of Ch^γ …