
Salient Contour Extraction by Temporal Binding in a Cortically-based Network

Neural Information Processing Systems

It has been suggested that long-range intrinsic connections in striate cortex may play a role in contour extraction (Gilbert et al., 1996). A number of recent physiological and psychophysical studies have examined the possible role of long-range connections in the modulation of contrast detection thresholds (Polat and Sagi, 1993, 1994; Kapadia et al., 1995; Kovacs and Julesz, 1994) and various pre-attentive detection tasks (Kovacs and Julesz, 1993; Field et al., 1993). We have developed a network architecture based on the anatomical connectivity of striate cortex, as well as the temporal dynamics of neuronal processing, that is able to reproduce the observed experimental results. The network has been tested on real images and has applications in terms of identifying salient contours in automatic image processing systems.

1 INTRODUCTION

Vision is an active process, and one of the earliest, preattentive actions in visual processing is the identification of the salient contours in a scene. We propose that this process depends upon two properties of striate cortex: the pattern of horizontal connections between orientation columns, and temporal synchronization of cell responses. In particular, we propose that perceptual salience is directly related to the degree of cell synchronization. We present results of network simulations that account for recent physiological and psychophysical "pop-out" experiments, and which successfully extract salient contours from real images.
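
The abstract does not give the network equations. As a rough, hypothetical illustration of the temporal-binding idea (the coupling values and the Kuramoto-style dynamics below are editorial assumptions, not the paper's architecture), one can couple orientation-tuned units as phase oscillators, let the coupling pattern stand in for the horizontal connections, and read out salience as the degree of phase synchronization:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8                        # orientation-tuned units along a candidate contour
K = np.zeros((n, n))         # coupling stands in for horizontal connections
for i in range(n - 1):       # stronger coupling between collinear neighbours
    K[i, i + 1] = K[i + 1, i] = 2.0

theta = rng.uniform(0, 2 * np.pi, n)   # oscillator phases (spike-timing proxy)
omega = rng.normal(10.0, 0.5, n)       # intrinsic firing frequencies
dt = 1e-3

for _ in range(5000):        # Kuramoto-style phase dynamics
    coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + coupling)

# Degree of synchronization (Kuramoto order parameter) as a salience proxy.
salience = np.abs(np.exp(1j * theta).mean())
print(f"synchronization-based salience: {salience:.2f}")
```

Units that belong to a smooth contour, and are therefore strongly coupled, phase-lock and give a salience near 1; unrelated elements remain desynchronized and score low.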


Cholinergic Modulation Preserves Spike Timing Under Physiologically Realistic Fluctuating Input

Neural Information Processing Systems

Recently, there has been a vigorous debate concerning the nature of neural coding (Rieke et al. 1996; Stevens and Zador 1995; Shadlen and Newsome 1994). The prevailing view has been that the mean firing rate conveys all information about the sensory stimulus in a spike train and that the precise timing of the individual spikes is noise. This belief is based, in part, on a lack of correlation between the precise timing of the spikes and the sensory qualities of the stimulus under study, particularly on a lack of spike-timing repeatability when identical stimulation is delivered. This view has been challenged by a number of recent studies, in which highly repeatable temporal patterns of spikes can be observed both in vivo (Bair and Koch 1996; Abeles et al. 1993) and in vitro (Mainen and Sejnowski 1994). Furthermore, application of information theory to the coding problem in the frog and house fly (Bialek et al. 1991; Bialek and Rieke 1992) suggested that additional information could be extracted from spike timing. In the absence of direct evidence for a timing code in the cerebral cortex, the role of spike timing in neural coding remains controversial.


Separating Style and Content

Neural Information Processing Systems

We seek to analyze and manipulate two factors, which we call style and content, underlying a set of observations. We fit training data with bilinear models which explicitly represent the two-factor structure. These models can adapt easily during testing to new styles or content, allowing us to solve three general tasks: extrapolation of a new style to unobserved content; classification of content observed in a new style; and translation of new content observed in a new style.
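
The bilinear form itself is compact enough to state; the symmetric parameterization below is one standard way of writing such a two-factor model (given here for illustration), in which each observation mixes a style vector and a content vector through a learned interaction tensor:

```latex
% Symmetric bilinear model: component k of an observation rendered
% in style s with content c.
y^{sc}_{k} \;=\; \sum_{i}\sum_{j} w_{ijk}\, a^{s}_{i}\, b^{c}_{j}
```

Here a^s is the style vector, b^c the content vector, and the weights w_ijk capture their interaction. Adapting to a new style then amounts to fitting a new a^s while the interaction weights and content vectors are held fixed, which is what makes the extrapolation, classification, and translation tasks tractable.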


Smoothing Regularizers for Projective Basis Function Networks

Neural Information Processing Systems

Smoothing regularizers for radial basis functions have been studied extensively, but no general smoothing regularizers for projective basis functions (PBFs), such as the widely-used sigmoidal PBFs, have heretofore been proposed. We derive new classes of algebraically-simple mth-order smoothing regularizers for networks of the form f(W, x)
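
The formula is cut off in this excerpt. For concreteness, a single-hidden-layer sigmoidal PBF network of the kind presumably intended (the specific form below is our assumption, not a quotation from the paper) can be written as:

```latex
% Illustrative single-hidden-layer projective basis function network.
f(W, x) \;=\; u_{0} \;+\; \sum_{j=1}^{N_{h}} u_{j}\, g\!\left(v_{j}^{\top} x + v_{j0}\right)
```

where g is a sigmoidal basis function acting on a projection of the input and W collects all weights {u_j, v_j, v_j0}; an mth-order smoothing regularizer then penalizes large mth derivatives of f with respect to the inputs.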


Predicting Lifetimes in Dynamically Allocated Memory

Neural Information Processing Systems

Predictions of lifetimes of dynamically allocated objects can be used to improve time and space efficiency of dynamic memory management in computer programs. Barrett and Zorn [1993] used a simple lifetime predictor and demonstrated this improvement on a variety of computer programs. In this paper, we use decision trees to do lifetime prediction on the same programs and show significantly better prediction. Our method also has the advantage that during training we can use a large number of features and let the decision tree automatically choose the relevant subset.
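
As a minimal sketch of this kind of predictor (the feature names and the short-lived/long-lived split below are hypothetical stand-ins; the paper's features come from the traced programs), one can train a decision tree on per-allocation features and let it pick out the informative ones:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical per-allocation features: allocation-site id, requested size
# in bytes, and a hash of the call stack at allocation time.
n = 2000
X = np.column_stack([
    rng.integers(0, 50, n),        # allocation site
    rng.integers(8, 4096, n),      # requested size
    rng.integers(0, 1000, n),      # call-stack hash
])
# Hypothetical labels: 1 = short-lived object, 0 = long-lived object.
y = (X[:, 0] < 25).astype(int)

clf = DecisionTreeClassifier(max_depth=5).fit(X[:1500], y[:1500])
print("held-out accuracy:", clf.score(X[1500:], y[1500:]))
```

The allocator would consult such a tree at allocation time and could, for example, place objects predicted to be short-lived in a separately managed region.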


A Micropower Analog VLSI HMM State Decoder for Wordspotting

Neural Information Processing Systems

We describe the implementation of a hidden Markov model state decoding system, a component for a wordspotting speech recognition system. The key specification for this state decoder design is microwatt power dissipation; this requirement led to a continuous-time, analog-circuit implementation. We characterize the operation of a 10-word (81-state) state decoder test chip.
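
For readers unfamiliar with what an HMM state decoder computes, the sketch below shows the discrete-time, digital counterpart of the recursion: a Viterbi-style per-frame state-score update (the model sizes and transition values are made up, and the chip realizes an equivalent computation with continuous-time analog circuits rather than software):

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 81                               # e.g. states of a 10-word model

# Left-to-right transition structure with self-loops (illustrative values).
log_A = np.full((n_states, n_states), np.log(1e-6))
idx = np.arange(n_states)
log_A[idx, idx] = np.log(0.6)               # stay in the same state
log_A[idx[:-1], idx[:-1] + 1] = np.log(0.4) # advance to the next state

log_delta = np.full(n_states, -np.inf)
log_delta[0] = 0.0                          # start in the first state

for _ in range(100):                        # one update per input frame
    log_b = rng.normal(size=n_states)       # stand-in acoustic log-likelihoods
    log_delta = log_b + np.max(log_delta[:, None] + log_A, axis=0)

print("best end-state score:", log_delta.max())
```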


Approximate Solutions to Optimal Stopping Problems

Neural Information Processing Systems

We propose and analyze an algorithm that approximates solutions to the problem of optimal stopping in a discounted irreducible aperiodic Markov chain. The scheme involves the use of linear combinations of fixed basis functions to approximate a Q-function. The weights of the linear combination are incrementally updated through an iterative process similar to Q-learning, involving simulation of the underlying Markov chain. Due to space limitations, we only provide an overview of a proof of convergence (with probability 1) and bounds on the approximation error. This is the first theoretical result that establishes the soundness of a Q-learning-like algorithm when combined with arbitrary linear function approximators to solve a sequential decision problem.
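
A rough reconstruction of an update of this kind (our own sketch under stated assumptions, not the paper's exact algorithm): approximate the value of continuing as Q(x) = phi(x)^T w and, along a simulated trajectory, move w toward the one-step target alpha * max(g(x'), Q(x')), where g(x) is the reward collected by stopping in state x:

```python
import numpy as np

rng = np.random.default_rng(0)

alpha = 0.95                       # discount factor
d = 8                              # number of fixed basis functions

def phi(x):
    # Fixed basis functions; simple polynomials are an arbitrary choice here.
    return np.array([x ** k for k in range(d)])

def g(x):
    # Reward for stopping in state x (illustrative choice).
    return max(1.0 - x, 0.0)

def step(x):
    # Simulated transition of the underlying Markov chain (illustrative).
    return min(max(x + rng.normal(0.0, 0.05), 0.0), 1.0)

w = np.zeros(d)
x = rng.uniform()
for t in range(1, 50_000):
    x_next = step(x)
    # Target: discounted value of the better of stopping or continuing at x'.
    target = alpha * max(g(x_next), phi(x_next) @ w)
    w += (1.0 / t) * (target - phi(x) @ w) * phi(x)   # diminishing step sizes
    x = x_next

print("approximate continuation value at x = 0.5:", phi(0.5) @ w)
```

Stopping is then approximately optimal in any state x where g(x) >= phi(x)^T w.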


Self-Organizing and Adaptive Algorithms for Generalized Eigen-Decomposition

Neural Information Processing Systems

The paper is developed in two parts where we discuss a new approach to self-organization in a single-layer linear feed-forward network. First, two novel algorithms for self-organization are derived from a two-layer linear hetero-associative network performing a one-of-m classification, and trained with the constrained least-mean-squared classification error criterion. Second, two adaptive algorithms are derived from these self-organizing procedures to compute the principal generalized eigenvectors of two correlation matrices from two sequences of random vectors. These novel adaptive algorithms can be implemented in a single-layer linear feed-forward network. We give a rigorous convergence analysis of the adaptive algorithms by using stochastic approximation theory. As an example, we consider a problem of online signal detection in digital mobile communications.
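
As a point of reference for what the adaptive algorithms converge to (the batch computation below is only the target quantity, not the adaptive procedure described in the paper), the principal generalized eigenvectors of two correlation matrices R1 and R2 estimated from the two vector sequences satisfy R1 w = lambda R2 w and can be checked with a standard solver:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Two sequences of random vectors; their correlation matrices define the problem.
X1 = rng.normal(size=(5000, 4)) @ np.diag([3.0, 2.0, 1.0, 0.5])
X2 = rng.normal(size=(5000, 4))

R1 = X1.T @ X1 / len(X1)
R2 = X2.T @ X2 / len(X2) + 0.1 * np.eye(4)   # keep R2 positive definite

# Generalized eigenproblem R1 w = lam * R2 w; eigenvalues come back ascending.
lam, W = eigh(R1, R2)
print("principal generalized eigenvalue:", lam[-1])
print("principal generalized eigenvector:", W[:, -1])
```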


A Model of Recurrent Interactions in Primary Visual Cortex

Neural Information Processing Systems

A general feature of the cerebral cortex is its massive interconnectivity - it has been estimated anatomically [19] that cortical neurons receive upwards of 5,000 synapses, the majority of which originate from other nearby cortical neurons. Numerous experiments in primary visual cortex (V1) have revealed strongly nonlinear interactions between stimulus elements which activate classical and nonclassical receptive field regions. Recurrent cortical connections likely contribute substantially to these effects. However, most theories of visual processing have either assumed a feedforward processing scheme [7], or have used recurrent interactions to account for isolated effects only [1, 16, 18]. Since nonlinear systems cannot in general be taken apart and analyzed in pieces, it is not clear what one learns by building a recurrent model that accounts for only one, or very few, phenomena. Here we develop a relatively simple model of recurrent interactions in V1 that reflects major anatomical and physiological features of intracortical connectivity, and simultaneously accounts for a wide range of phenomena observed physiologically. All phenomena we address are strongly nonlinear, and cannot be explained by linear feedforward models.
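
The abstract does not state the model equations. Recurrent V1 models of this general type are often written as firing-rate dynamics in which each unit combines feedforward drive with input from other cortical units (the form below is a generic illustration, not the paper's specific equations):

```latex
% Generic recurrent firing-rate dynamics (illustrative form only).
\tau \frac{dr_{i}}{dt} \;=\; -\,r_{i} \;+\; f\!\Big( h_{i} + \sum_{j} W_{ij}\, r_{j} \Big)
```

Here r_i is the response of unit i, h_i its feedforward input, W_ij the intracortical connection weights, and f a nonlinear activation; the contextual effects discussed above arise from the interplay of W and the nonlinearity, which is why they cannot be captured by a linear feedforward model.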


Recursive Algorithms for Approximating Probabilities in Graphical Models

Neural Information Processing Systems

We develop a recursive node-elimination formalism for efficiently approximating large probabilistic networks. No constraints are set on the network topologies. Yet the formalism can be straightforwardly integrated with exact methods whenever they are or become applicable. The approximations we use are controlled: they consistently maintain upper and lower bounds on the desired quantities at all times. We show that Boltzmann machines, sigmoid belief networks, or any combination (i.e., chain graphs) can be handled within the same framework.
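
As one example of the kind of controlled bound involved (stated here as an illustration of the flavor of such approximations, not as a quotation of the paper's derivation), the logistic function sigma(x) = 1/(1 + e^{-x}) admits variational upper and lower bounds that hold for every setting of the variational parameter, so each bound can be tightened independently:

```latex
% Variational upper bound, valid for any mu in [0,1].
\sigma(x) \;\le\; \exp\{\mu x - H(\mu)\},
\qquad H(\mu) = -\mu\log\mu - (1-\mu)\log(1-\mu)

% Variational lower bound, valid for any xi (quadratic bound on the log-sigmoid).
\sigma(x) \;\ge\; \sigma(\xi)\,
\exp\!\Big\{\tfrac{x-\xi}{2} - \lambda(\xi)\,\big(x^{2}-\xi^{2}\big)\Big\},
\qquad \lambda(\xi) = \frac{\tanh(\xi/2)}{4\,\xi}
```

Eliminating a node while applying an upper bound to some terms and a lower bound to others is what keeps the resulting approximation bracketed between computable upper and lower bounds on the desired probabilities.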