Correlation Functions in a Large Stochastic Neural Network

Neural Information Processing Systems

In many cases the cross-correlations between the activities of cortical neurons are approximately symmetric about zero time delay. These have been taken as an indication of the presence of "functional connectivity" between the correlated neurons (Fetz, Toyama and Smith 1991, Abeles 1991). However, a quantitative comparison between the observed cross-correlations and those expected to exist between neurons that are part of a large assembly of interacting neurons has been lacking. Most theoretical studies of recurrent neural network models consider only time-averaged firing rates, which are usually given as solutions of mean-field equations. They do not account for the fluctuations about these averages, the study of which requires going beyond the mean-field approximations. In this work we perform a theoretical study of the fluctuations in the neuronal activities and their correlations, in a large stochastic network of excitatory and inhibitory neurons. Depending on the model parameters, this system can exhibit coherent undamped oscillations. Here we focus on parameter regimes where the system is in a statistically stationary state, which is more appropriate for modeling non-oscillatory neuronal activity in cortex. Our results for the magnitudes and the time dependence of the correlation functions can provide a basis for comparison with physiological data on neuronal correlation functions.
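As a hedged illustration of the quantity discussed above (not the paper's analysis; all names and signals are made up), the sketch below estimates an empirical cross-correlation function between two fluctuating activity traces. A shared input produces a correlation peak that is roughly symmetric about zero delay:

```python
import numpy as np

# Illustrative sketch: empirical cross-correlation C(tau) = <dx(t) dy(t+tau)>
# between two mean-subtracted activity traces. Shared input induces a peak
# near tau = 0 that is approximately symmetric about zero delay.
def cross_correlation(x, y, max_lag):
    dx = x - x.mean()
    dy = y - y.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    c = np.empty(lags.size)
    for i, tau in enumerate(lags):
        if tau >= 0:
            c[i] = np.mean(dx[: dx.size - tau] * dy[tau:])
        else:
            c[i] = np.mean(dx[-tau:] * dy[: dy.size + tau])
    return lags, c

rng = np.random.default_rng(0)
shared = rng.normal(size=2000)          # common drive to both units
x = shared + rng.normal(size=2000)      # unit 1: shared drive + private noise
y = shared + rng.normal(size=2000)      # unit 2: shared drive + private noise
lags, c = cross_correlation(x, y, 20)   # correlation peak expected at tau = 0
```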


Bayesian Modeling and Classification of Neural Signals

Neural Information Processing Systems

Signal processing and classification algorithms often have limited applicability resulting from an inaccurate model of the signal's underlying structure. We present here an efficient, Bayesian algorithm for modeling a signal composed of the superposition of brief, Poisson-distributed functions. This methodology is applied to the specific problem of modeling and classifying extracellular neural waveforms which are composed of a superposition of an unknown number of action potentials (APs). Previous approaches have had limited success due largely to the problems of determining the spike shapes, deciding how many distinct shapes are present, and decomposing overlapping APs. A Bayesian solution to each of these problems is obtained by inferring a probabilistic model of the waveform. This approach quantifies the uncertainty of the form and number of the inferred AP shapes and is used to obtain an efficient method for decomposing complex overlaps. This algorithm can extract many times more information than previous methods and facilitates the extracellular investigation of neuronal classes and of interactions within neuronal circuits.
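The overlap-decomposition step can be illustrated with a much simpler stand-in than the paper's Bayesian inference: given two known (here, hypothetical) action-potential templates, search over relative shifts for the superposition that best fits the recorded waveform in a least-squares sense.

```python
import numpy as np

# Simplified sketch (not the paper's algorithm): decompose a waveform that is
# the superposition of two known AP templates by exhaustively searching over
# the relative shift of the second template for the best least-squares fit.
def best_decomposition(wave, t1, t2, max_shift):
    best_shift, best_err = None, np.inf
    for s in range(max_shift + 1):
        model = np.zeros(len(wave))
        model[: len(t1)] += t1          # first AP anchored at the start
        model[s : s + len(t2)] += t2    # second AP shifted by s samples
        err = np.sum((wave - model) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift, best_err

# Hypothetical templates and a synthetic overlap with true shift = 3.
t1 = np.array([0.0, 1.0, 3.0, 1.0, 0.0])
t2 = np.array([0.0, -1.0, -2.0, -1.0, 0.0])
wave = np.zeros(12)
wave[:5] += t1
wave[3:8] += t2
shift, err = best_decomposition(wave, t1, t2, 7)
```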


A Unified Gradient-Descent/Clustering Architecture for Finite State Machine Induction

Neural Information Processing Systems

Researchers often try to understand, post hoc, the representations that emerge in the hidden layers of a neural net following training. Interpretation is difficult because these representations are typically highly distributed and continuous. By "continuous," we mean that if one constructed a scatterplot over the hidden unit activity space of patterns obtained in response to various inputs, examination at any scale would reveal the patterns to be broadly distributed over the space.


Globally Trained Handwritten Word Recognizer using Spatial Representation, Convolutional Neural Networks, and Hidden Markov Models

Neural Information Processing Systems

We introduce a new approach for online recognition of handwritten words written in unconstrained mixed style. The preprocessor performs a word-level normalization by fitting a model of the word structure using the EM algorithm. Words are then coded into low resolution "annotated images" where each pixel contains information about trajectory direction and curvature. The recognizer is a convolution network which can be spatially replicated. From the network output, a hidden Markov model produces word scores. The entire system is globally trained to minimize word-level errors.

1 Introduction

Natural handwriting is often a mixture of different "styles": lower case printed, upper case, and cursive.
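As one hedged illustration of what an "annotated image" could look like (this is an assumption for illustration, not the paper's encoding), the sketch below bins a pen trajectory with coordinates normalized to [0, 1] into a low-resolution grid where each cell stores the average stroke direction:

```python
import numpy as np

# Hypothetical sketch: bin a pen trajectory into a coarse grid where each
# cell holds the mean unit direction vector of the strokes passing through
# it, one possible form of direction annotation. Grid size is illustrative.
def annotate(points, grid=(5, 20)):
    img = np.zeros(grid + (2,))      # per-cell summed direction vectors
    counts = np.zeros(grid)          # per-cell segment counts
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        d = np.array([x1 - x0, y1 - y0])
        norm = np.linalg.norm(d)
        if norm == 0:
            continue
        r = min(int(y0 * grid[0]), grid[0] - 1)
        c = min(int(x0 * grid[1]), grid[1] - 1)
        img[r, c] += d / norm
        counts[r, c] += 1
    nz = counts > 0
    img[nz] /= counts[nz, None]      # average direction per visited cell
    return img

stroke = [(i / 10, 0.5) for i in range(11)]   # horizontal stroke, left to right
img = annotate(stroke)
```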


Mixtures of Controllers for Jump Linear and Non-Linear Plants

Neural Information Processing Systems

To control such complex systems it is computationally more efficient to decompose the problem into smaller subtasks, with different control strategies for different operating points. When detailed information about the plant is available, gain scheduling has proven a successful method for designing a global control (Shamma and Athans, 1992). The system is partitioned by choosing several operating points and a linear model for each operating point. A controller is designed for each linear model and a method for interpolating or 'scheduling' the gains of the controllers is chosen. The control problem becomes even more challenging when the system to be controlled is non-stationary, and the mode of the system is not explicitly observable.
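The gain-scheduling baseline described above can be sketched as follows (this is plain linear interpolation of gains, not the paper's mixture-of-controllers; all operating points and gains are illustrative):

```python
import numpy as np

# Hedged sketch of gain scheduling: proportional gains designed at a few
# operating points are linearly interpolated at the current operating point.
def scheduled_control(op, op_points, gains, error):
    # np.interp clips op to the range of op_points, so the nearest designed
    # controller is used outside the scheduled range.
    w = np.interp(op, op_points, np.arange(len(op_points)))
    lo = int(np.floor(w))
    hi = min(lo + 1, len(gains) - 1)
    frac = w - lo
    k = (1.0 - frac) * gains[lo] + frac * gains[hi]   # interpolated gain
    return -k * error                                  # proportional feedback

# Two operating points with gains 2 and 4; halfway between them the
# scheduled gain is 3.
u = scheduled_control(0.5, [0.0, 1.0], np.array([2.0, 4.0]), error=1.0)
```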


Constructive Learning Using Internal Representation Conflicts

Neural Information Processing Systems

The first class of network adaptation algorithms starts out with a redundant architecture and proceeds by pruning away seemingly unimportant weights (Sietsma and Dow, 1988; Le Cun et al., 1990). A second class of algorithms starts off with a sparse architecture and grows the network to the complexity required by the problem. Several algorithms have been proposed for growing feedforward networks. The upstart algorithm of Frean (1990) and the cascade-correlation algorithm of Fahlman (1990) are examples of this approach.
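A minimal sketch of the first (pruning) approach, using magnitude as the saliency criterion rather than the second-derivative measure of Le Cun et al.'s Optimal Brain Damage; the weight matrix and pruning fraction are illustrative:

```python
import numpy as np

# Magnitude pruning sketch: zero out the given fraction of the
# smallest-magnitude weights, a simple stand-in for saliency-based pruning.
def prune_by_magnitude(weights, fraction):
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.array([[0.1, -2.0],
              [0.05, 1.5]])
pruned = prune_by_magnitude(w, 0.5)   # removes the two smallest weights
```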


The Statistical Mechanics of k-Satisfaction

Neural Information Processing Systems

The satisfiability of random CNF formulae with precisely k variables per clause ("k-SAT") is a popular testbed for the performance of search algorithms. Formulae have M clauses drawn from N variables, with each literal randomly negated, keeping the ratio α = M/N fixed.
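The random ensemble described above can be generated directly: M = αN clauses, each over k distinct variables, each literal negated with probability 1/2. The parameter values below are illustrative (α = 4.25 is near the experimentally observed 3-SAT threshold):

```python
import random

# Generate a random k-SAT formula: m = int(alpha * n_vars) clauses, each with
# k distinct variables, each negated with probability 1/2. Literals are
# signed integers in DIMACS style (+v / -v).
def random_ksat(n_vars, alpha, k, seed=0):
    rng = random.Random(seed)
    m = int(alpha * n_vars)
    formula = []
    for _ in range(m):
        chosen = rng.sample(range(1, n_vars + 1), k)   # distinct variables
        clause = [v if rng.random() < 0.5 else -v for v in chosen]
        formula.append(clause)
    return formula

f = random_ksat(n_vars=20, alpha=4.25, k=3)
```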


Probabilistic Anomaly Detection in Dynamic Systems

Neural Information Processing Systems

This paper describes probabilistic methods for novelty detection when using pattern recognition methods for fault monitoring of dynamic systems. The problem of novelty detection is particularly acute when prior knowledge and training data only allow one to construct an incomplete classification model. Allowance must be made in model design so that the classifier will be robust to data generated by classes not included in the training phase. For diagnosis applications one practical approach is to construct both an input density model and a discriminative class model. Using Bayes' rule and prior estimates of the relative likelihood of data of known and unknown origin, the resulting classification equations are straightforward.
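A hedged sketch of combining the two models described above (the density model, posterior, and threshold here are illustrative stand-ins, not the paper's equations): when the input density under the known-data model is low, the point is flagged as novel; otherwise the discriminative model assigns a known class.

```python
import numpy as np

# Novelty-aware classification sketch: reject to "novel" when p(x) under the
# known-data density model falls below a threshold, else pick the most
# probable known class from the discriminative model.
def classify_with_novelty(x, class_post, density, tau=0.05):
    if density(x) < tau:                 # x unlikely under known classes
        return "novel"
    return int(np.argmax(class_post(x)))

post = lambda x: np.array([0.3, 0.7])                       # hypothetical P(class | x)
dens = lambda x: np.exp(-x * x / 2) / np.sqrt(2 * np.pi)    # hypothetical p(x), N(0, 1)
label = classify_with_novelty(0.0, post, dens)    # near the known data
flag = classify_with_novelty(10.0, post, dens)    # far from the known data
```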


Processing of Visual and Auditory Space and Its Modification by Experience

Neural Information Processing Systems

Visual spatial information is projected from the retina to the brain in a highly topographic fashion, so that 2-D visual space is represented in a simple retinotopic map. Auditory spatial information, by contrast, has to be computed from binaural time and intensity differences as well as from monaural spectral cues produced by the head and ears. Evaluation of these cues in the central nervous system leads to the generation of neurons that are sensitive to the location of a sound source in space ("spatial tuning") and, in some animal species, to auditory space maps where spatial location is encoded as a 2-D map just like in the visual system. The brain structures thought to be involved in the multimodal integration of visual and auditory spatial information are the superior colliculus in the midbrain and the inferior parietal lobe in the cerebral cortex. It has been suggested for the owl that the visual system participates in setting up the auditory space map in the superior colliculus.


High Performance Neural Net Simulation on a Multiprocessor System with "Intelligent" Communication

Neural Information Processing Systems

The performance requirements in experimental research on artificial neural nets often far exceed the capability of workstations and PCs. But speed is not the only requirement. Flexibility and implementation time for new algorithms are usually of equal importance. This paper describes the simulation of neural nets on the MUSIC parallel supercomputer, a system that strikes a good balance between these three issues and therefore made many research projects possible that were unthinkable before. The system should be flexible, simple to program, and its realization time should be short enough that it is not obsolete by the time it is finished. Therefore, the fastest available standard components were used.