
 Tishby, Naftali


Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway

Neural Information Processing Systems

The way groups of auditory neurons interact to code acoustic information is investigated using an information-theoretic approach. We develop measures of redundancy among groups of neurons, and apply them to the study of collaborative coding efficiency in two processing stations in the auditory pathway: the inferior colliculus (IC) and the primary auditory cortex (AI). Under two schemes for the coding of the acoustic content, acoustic segment coding and stimulus identity coding, we show differences both in information content and in group redundancies between IC and AI neurons. These results provide, for the first time, direct evidence for redundancy reduction along the ascending auditory pathway, as had been hypothesized on theoretical grounds [Barlow 1959, 2001]. The redundancy effects under the single-spikes coding scheme are significant only for groups larger than ten cells, and cannot be revealed with redundancy measures that use only pairs of cells. The results suggest that the auditory system transforms low-level representations that contain redundancies due to the statistical structure of natural stimuli into a representation in which cortical neurons extract rare and independent components of complex acoustic signals that are useful for auditory scene analysis.
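The group redundancy described above can be sketched as the difference between the summed single-cell stimulus informations and the joint information carried by the whole group. A minimal illustration, assuming a discrete joint distribution over the stimulus and the cell responses (the function names are illustrative, not from the paper):

```python
import numpy as np

def mi(joint2d):
    """Mutual information (bits) of a 2-D joint probability table."""
    j = joint2d / joint2d.sum()
    px = j.sum(axis=1, keepdims=True)
    py = j.sum(axis=0, keepdims=True)
    nz = j > 0
    return float((j[nz] * np.log2(j[nz] / (px * py)[nz])).sum())

def group_redundancy(p):
    """p[s, x1, ..., xn]: joint distribution over stimulus S and n cells.
    Returns sum_i I(X_i; S) - I(X_1..X_n; S); positive values indicate
    redundancy, negative values synergy."""
    n = p.ndim - 1
    total = 0.0
    for i in range(n):
        # marginalize out every cell except cell i
        axes = tuple(k for k in range(1, p.ndim) if k != i + 1)
        total += mi(p.sum(axis=axes))
    # joint group information: flatten all cell axes into one
    return total - mi(p.reshape(p.shape[0], -1))

# Two perfectly correlated binary cells, each copying a binary stimulus:
p = np.zeros((2, 2, 2))
p[0, 0, 0] = 0.5
p[1, 1, 1] = 0.5
```

In this toy case each cell carries 1 bit about the stimulus and the pair jointly carries 1 bit, so the group redundancy is 1 bit.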


Temporally Dependent Plasticity: An Information Theoretic Account

Neural Information Processing Systems

These spikes can be temporally weighted in many ways: from a computational point of view it is beneficial to weight spikes uniformly over time, but this may require a long "memory" and is biologically implausible.
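The contrast between uniform weighting (long memory) and a decaying weighting (a leaky integrator, which needs no explicit memory of past spike times) can be sketched as follows; the exponential kernel and helper names are assumptions for illustration, not the paper's model:

```python
import math

def uniform_rate(spike_times, t, window):
    """Uniform weighting: every spike in the last `window` seconds
    counts equally, so the full window must be remembered."""
    recent = [s for s in spike_times if t - window <= s <= t]
    return len(recent) / window

def leaky_rate(spike_times, t, tau):
    """Exponential weighting: recent spikes count more. Equivalent to a
    leaky integrator updated online, with effective memory ~ tau."""
    return sum(math.exp(-(t - s) / tau) for s in spike_times if s <= t) / tau

spikes = [0.1, 0.4, 0.9, 1.0]
# Both estimate the underlying rate from the observed spikes; the leaky
# version discounts the oldest spikes instead of storing them.
```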


Temporally Dependent Plasticity: An Information Theoretic Account

Neural Information Processing Systems

It should be stressed that in our model, information is coded in the non-stationary rates that underlie the input spike trains. As these rates are not observable, any learning must depend on the observable input spikes that realize those underlying rates.


Universality and Individuality in a Neural Code

Neural Information Processing Systems

This basic question in the theory of knowledge seems to be beyond the scope of experimental investigation. An accessible version of this question is whether different observers of the same sense data have the same neural representation of these data: how much of the neural code is universal, and how much is individual? Differences in the neural codes of different individuals may arise from various sources: First, different individuals may use different 'vocabularies' of coding symbols. Second, they may use the same symbols to encode different stimulus features. Third, they may have different latencies, so they 'say' the same things at slightly different times.


Data Clustering by Markovian Relaxation and the Information Bottleneck Method

Neural Information Processing Systems

We introduce a new, nonparametric, and principled distance-based clustering method. It combines a pairwise-distance approach with a vector-quantization method that provides a meaningful interpretation of the resulting clusters. The idea is to turn the distance matrix into a Markov process and then examine the decay of mutual information during the relaxation of this process. The clusters emerge as quasi-stable structures during this relaxation, and are then extracted using the information bottleneck method.
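The relaxation step can be illustrated with a short sketch: turn pairwise distances into a row-stochastic transition matrix via an exponential kernel, then track how I(X_0; X_t) decays as the chain relaxes. The kernel form exp(-d/λ), the scale λ, and the function names are illustrative assumptions:

```python
import numpy as np

def transition_matrix(D, lam):
    """Row-stochastic transition matrix from a distance matrix:
    p(j|i) proportional to exp(-d_ij / lam)."""
    W = np.exp(-D / lam)
    return W / W.sum(axis=1, keepdims=True)

def relaxation_information(D, lam, max_t):
    """I(X_0; X_t) in bits at each relaxation step, uniform p(x_0)."""
    P = transition_matrix(D, lam)
    n = len(D)
    Pt = np.eye(n)
    info = []
    for _ in range(max_t):
        Pt = Pt @ P
        joint = Pt / n                       # p(x_0, x_t)
        p0 = joint.sum(axis=1, keepdims=True)
        pt = joint.sum(axis=0, keepdims=True)
        nz = joint > 0
        info.append(float((joint[nz] * np.log2(joint[nz] / (p0 * pt)[nz])).sum()))
    return info

# Two well-separated pairs of points on a line: after within-pair mixing,
# the information curve plateaus near the two-cluster structure.
pts = [0.0, 0.1, 5.0, 5.1]
D = np.abs(np.subtract.outer(pts, pts))
curve = relaxation_information(D, 0.5, 30)
```

By the data-processing inequality the curve is non-increasing in t; quasi-stable cluster structures show up as plateaus in its decay.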


Information Capacity and Robustness of Stochastic Neuron Models

Neural Information Processing Systems

The reliability and accuracy of spike trains have been shown to depend on the nature of the stimulus that the neuron encodes.


Agglomerative Information Bottleneck

Neural Information Processing Systems

This question was recently shown in [9] to be a special case of a much more fundamental problem: What are the features of the variable X that are relevant for the prediction of another relevance variable, Y?
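The agglomerative procedure greedily merges the pair of clusters whose merger loses the least information about Y. A sketch of the standard Jensen-Shannon merge cost, with illustrative function names (a minimal sketch, not the paper's full algorithm):

```python
import numpy as np

def kl(p, q):
    """KL divergence D(p||q) in bits for discrete distributions."""
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / q[nz])).sum())

def merge_cost(p_zi, p_zj, py_zi, py_zj):
    """Information about Y lost by merging clusters z_i and z_j:
    (p(z_i)+p(z_j)) * JS_pi(p(y|z_i), p(y|z_j)), the Jensen-Shannon
    divergence weighted by the clusters' relative masses."""
    pm = p_zi + p_zj
    wi, wj = p_zi / pm, p_zj / pm
    py_m = wi * py_zi + wj * py_zj        # p(y | merged cluster)
    return pm * (wi * kl(py_zi, py_m) + wj * kl(py_zj, py_m))
```

Clusters with identical conditionals p(y|z) merge at zero cost, while clusters predicting disjoint Y values are maximally expensive to merge, which is what drives the greedy bottom-up construction.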

