Collaborating Authors

Obermayer, Klaus


Bayesian Transduction

Neural Information Processing Systems

Transduction is an inference principle that takes a training sample and aims at estimating the values of a function only at given points contained in a so-called working sample, as opposed to the whole of input space as in induction. Transduction provides a confidence measure on single predictions rather than on classifiers, a feature particularly important for risk-sensitive applications. The possibly infinite number of functions is reduced to a finite number of equivalence classes on the working sample. A rigorous Bayesian analysis reveals that for standard classification loss we cannot benefit from considering more than one test point at a time. The probability of the label of a given test point is determined as the posterior measure of the corresponding subset of hypothesis space. We consider the PAC setting of binary classification by linear discriminant functions (perceptrons) in kernel space, such that the probability of labels is determined by the volume ratio in version space. We suggest sampling this region by an ergodic billiard. Experimental results on real-world data indicate that Bayesian Transduction compares favourably to the well-known Support Vector Machine, in particular if the posterior probability of labellings is used as a confidence measure to exclude test points of low confidence.
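
To make the volume-ratio picture concrete, here is a minimal toy sketch that estimates the posterior probability of a test label as the fraction of sampled perceptrons lying in version space (classifying every training point correctly) that vote for that label. Plain rejection sampling stands in for the paper's ergodic billiard, and the data and sample sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, linearly separable training sample (hypothetical data).
X_train = np.array([[1.0, 1.2], [0.8, 1.5], [-1.0, -0.9], [-1.2, -1.1]])
y_train = np.array([1, 1, -1, -1])
x_test = np.array([0.3, 0.2])

# Sample weight vectors uniformly on the unit sphere and keep those in
# version space, i.e. perceptrons consistent with all training labels.
# (Rejection sampling is a crude stand-in for the ergodic billiard.)
w = rng.normal(size=(100_000, 2))
w /= np.linalg.norm(w, axis=1, keepdims=True)
consistent = w[np.all(np.sign(w @ X_train.T) == y_train, axis=1)]

# The posterior probability of label +1 at x_test is approximated by the
# fraction of version-space volume (sampled perceptrons) voting for +1.
p_plus = np.mean(np.sign(consistent @ x_test) == 1)
print(f"P(y=+1 | x_test) ~ {p_plus:.3f} from {len(consistent)} perceptrons")
```

A value of p_plus near 0.5 marks exactly the kind of low-confidence test point that the experiments exclude.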


Contrast Adaptation in Simple Cells by Changing the Transmitter Release Probability

Neural Information Processing Systems

Using a recurrent neural network of excitatory spiking neurons with adapting synapses, we show that both effects could be explained by a fast and a slow component in the synaptic adaptation.
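
A minimal sketch of the stated mechanism, with illustrative parameter values that are not taken from the paper: the transmitter release probability is depressed by every presynaptic spike and recovers with one fast and one slow time constant, so sustained high-contrast (dense) input shifts the synapse's operating point:

```python
import numpy as np

# Two-timescale synaptic adaptation (all constants are illustrative).
dt = 1.0                             # time step (ms)
tau_fast, tau_slow = 50.0, 5000.0    # fast / slow recovery time constants (ms)
d_fast, d_slow = 0.3, 0.02           # per-spike depression of each component

def release_probability(spike_train, p0=0.5):
    """p(t) = p0 * f(t) * s(t), where f and s are fast and slow depression
    variables that decay back toward 1 between presynaptic spikes."""
    f = s = 1.0
    p = np.empty(len(spike_train))
    for t, spike in enumerate(spike_train):
        f += (1.0 - f) * dt / tau_fast   # exponential recovery
        s += (1.0 - s) * dt / tau_slow
        if spike:                        # each spike depletes both components
            f *= 1.0 - d_fast
            s *= 1.0 - d_slow
        p[t] = p0 * f * s
    return p

# A dense (high-contrast) spike train depresses p quickly via the fast
# component, while the slow component accumulates over seconds.
rng = np.random.default_rng(1)
spikes = rng.random(20_000) < 0.05       # ~50 Hz Poisson train at 1 ms steps
p = release_probability(spikes)
print(f"rested p = {p[0]:.3f}, adapted p = {p[-1]:.3f}")
```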


Classification on Pairwise Proximity Data

Neural Information Processing Systems

We investigate the problem of learning a classification task on data represented in terms of their pairwise proximities. This representation does not refer to an explicit feature representation of the data items and is thus more general than the standard approach of using Euclidean feature vectors, from which pairwise proximities can always be calculated. Our first approach is based on a combined linear embedding and classification procedure resulting in an extension of the Optimal Hyperplane algorithm to pseudo-Euclidean data. As an alternative we present another approach based on a linear threshold model in the proximity values themselves, which is optimized using Structural Risk Minimization. We show that prior knowledge about the problem can be incorporated by the choice of distance measures, and we examine different metrics.
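
One way to picture the second approach: each item is represented by its vector of proximities to the training items, and a linear threshold model is learned on those vectors. In the sketch below, synthetic data and scikit-learn's LinearSVC are stand-ins for the paper's proximity data and its Structural-Risk-Minimization-based optimizer:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)

# Synthetic items, used only to build a proximity matrix; the classifier
# never sees these coordinates, only the pairwise distances.
pts = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(3, 1, (50, 5))])
labels = np.array([0] * 50 + [1] * 50)

# Pairwise proximity matrix; Euclidean here, but any (even non-metric)
# proximity could be plugged in, which is the point of the approach.
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)

# Linear threshold model in the proximity values: row i of D, restricted
# to the training items, is the feature vector of item i.
train = rng.permutation(100)[:70]
test = np.setdiff1d(np.arange(100), train)
clf = LinearSVC(C=1.0).fit(D[np.ix_(train, train)], labels[train])
print("test accuracy:", clf.score(D[np.ix_(test, train)], labels[test]))
```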


The Role of Lateral Cortical Competition in Ocular Dominance Development

Neural Information Processing Systems

Lateral competition within a layer of neurons sharpens and localizes the response to an input stimulus. Here, we investigate a model for the activity-dependent development of ocular dominance maps which allows the degree of lateral competition to be varied. For weak competition, it resembles a correlation-based learning model, and for strong competition, it becomes a self-organizing map. Thus, in the regime of weak competition the receptive fields are shaped by the second-order statistics of the input patterns, whereas in the regime of strong competition, the higher moments and "features" of the individual patterns become important. When correlated localized stimuli from two eyes drive the cortical development we find (i) that a topographic map and binocular, localized receptive fields emerge when the degree of competition exceeds a critical value and (ii) that receptive fields exhibit eye dominance beyond a second critical value. For anti-correlated activity between the eyes, the second-order statistics drive the system to develop ocular dominance even for weak competition, but no topography emerges. Topography is established only beyond a critical degree of competition.
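
The varying degree of competition can be pictured as a softmax over cortical activations whose inverse temperature acts as the competition knob. The parametrization below is an illustrative sketch, not necessarily the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(3)

n_inputs, n_cortex = 20, 10
W = rng.random((n_cortex, n_inputs)) * 0.1   # feedforward weights
beta = 5.0    # degree of lateral competition (the knob being varied)
eta = 0.01    # learning rate

def cortical_activity(x, beta):
    """Soft competition: beta -> 0 gives nearly uniform activity, so weights
    are shaped by second-order input statistics (correlation-based regime);
    beta -> infinity gives winner-take-all, i.e. a self-organizing map."""
    h = W @ x
    e = np.exp(beta * (h - h.max()))
    return e / e.sum()

for _ in range(1000):
    x = rng.random(n_inputs)              # stand-in for a localized stimulus
    y = cortical_activity(x, beta)
    W += eta * np.outer(y, x)             # Hebbian update gated by competition
    W /= np.linalg.norm(W, axis=1, keepdims=True)   # keep weights bounded
```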


An Annealed Self-Organizing Map for Source Channel Coding

Neural Information Processing Systems

It is especially suited for speech and image data which in many applications have to be transmitted under low-bandwidth/high-noise-level conditions. Following the idea of (Farvardin, 1990) and (Luttrell, 1989) of jointly optimizing the codebook and the data representation w.r.t. a given channel noise, we apply a deterministic annealing scheme (Rose, 1990; Buhmann, 1997) to the problem and develop a soft topographic vector quantization algorithm (STVQ).
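
The joint optimization can be sketched as an EM-style loop in which the quantization error is first smeared by the channel's transition probabilities, soft assignments are then computed at an inverse temperature beta, and beta is annealed. The toy channel, names, and schedule below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

X = rng.random((500, 2))                # toy data to be transmitted
K = 8                                   # number of channel symbols / codewords
W = X[rng.choice(len(X), K)].copy()     # initial codebook

# Channel transition matrix T[r, s] = p(s received | r sent); here a
# simple symmetric channel with uniform confusion probability eps.
eps = 0.1
T = (1 - eps) * np.eye(K) + eps / (K - 1) * (1 - np.eye(K))

for beta in np.geomspace(0.1, 100.0, 40):     # deterministic annealing in beta
    # Expected distortion of sending symbol r for datum x, averaged over
    # channel noise: d(x, r) = sum_s p(s | r) * ||x - w_s||^2.
    sq = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)   # ||x - w_s||^2
    d = sq @ T.T
    # Soft assignments P(r | x) proportional to exp(-beta * d)  (E-step).
    logp = -beta * d
    P = np.exp(logp - logp.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    # Codebook update (M-step): each w_s is a mean of the data, weighted by
    # the probability that symbol s is actually received.
    R = P @ T
    W = (R.T @ X) / R.sum(axis=0)[:, None]

print("final codebook:\n", np.round(W, 3))
```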

