Obermayer, Klaus
Feature Selection and Classification on Matrix Data: From Large Margins to Small Covering Numbers
Hochreiter, Sepp, Obermayer, Klaus
We investigate the problem of learning a classification task for datasets which are described by matrices. Rows and columns of these matrices correspond to objects, where row and column objects may belong to different sets, and the entries in the matrix express the relationships between them. We interpret the matrix elements as being produced by an unknown kernel which operates on object pairs, and we show that, under mild assumptions, these kernels correspond to dot products in some (unknown) feature space. By minimizing a bound on the generalization error of a linear classifier, obtained using covering numbers, we derive an objective function for model selection according to the principle of structural risk minimization. The new objective function has the advantage that it allows the analysis of matrices which are not positive definite, and not even symmetric or square.
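A minimal sketch of this representation, under illustrative assumptions and not the paper's covering-number objective: each row of a rectangular, possibly indefinite relationship matrix is used directly as the feature vector of a row object, and a large-margin linear classifier is fitted on those rows. The synthetic matrix, labels, and the use of LinearSVC are assumptions for illustration only.

```python
# Sketch: rows of a rectangular, non-PSD relationship matrix K serve as feature
# vectors for the row objects; a margin-based linear classifier is trained on them.
# All data below are synthetic.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

n_rows, n_cols = 100, 40                        # row objects vs. column objects (different sets)
K = rng.normal(size=(n_rows, n_cols))           # relationship matrix: not square, symmetric, or PSD
y = np.where(K[:, :5].sum(axis=1) > 0, 1, -1)   # toy labels for the row objects

clf = LinearSVC(C=1.0)                          # large-margin linear classifier on the matrix rows
clf.fit(K, y)
print("training accuracy:", clf.score(K, y))
```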
Multi Dimensional ICA to Separate Correlated Sources
Vollgraf, Roland, Obermayer, Klaus
There are two linear transformations to be considered, one operating inside the channels (0) and one operating between the different channels (W). The two transformations are estimated in two adjacent ICA steps. There are mainly two advantages that can be taken from the first transformation: (i) by arranging independence among the columns of the transformed patches, the average transinformation between different channels is decreased.
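A two-stage pipeline in this spirit can be sketched with a standard ICA routine; the sketch below is an illustrative assumption (synthetic patch data, FastICA for both steps), not the paper's algorithm: one unmixing is estimated inside the channels, acting on the patch dimension, and a second unmixing is estimated between the channels.

```python
# Sketch of two adjacent ICA steps: a first unmixing fitted inside the channels
# (on the patch pixels), then a second unmixing fitted between the channels.
# Data, dimensions, and the use of FastICA are illustrative assumptions.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

n_channels, n_patches, patch_dim = 3, 500, 16              # e.g. 4x4 patches from 3 channels
X = rng.laplace(size=(n_channels, n_patches, patch_dim))   # stand-in for image patches

# Step 1: ICA inside the channels -- one transformation applied along the patch
# dimension, fitted on all channels pooled together.
pooled = X.reshape(-1, patch_dim)                           # (n_channels * n_patches, patch_dim)
ica_inside = FastICA(n_components=patch_dim, random_state=0)
S_inside = ica_inside.fit_transform(pooled).reshape(n_channels, n_patches, patch_dim)

# Step 2: ICA between the channels -- each (patch, pixel) pair is one observation
# of an n_channels-dimensional vector.
between = S_inside.transpose(1, 2, 0).reshape(-1, n_channels)
ica_between = FastICA(n_components=n_channels, random_state=0)
S_between = ica_between.fit_transform(between)
print(S_between.shape)                                      # (n_patches * patch_dim, n_channels)
```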
Activity Driven Adaptive Stochastic Resonance
Wenning, Gregor, Obermayer, Klaus
Cortical neurons might be considered as threshold elements integrating in parallel many excitatory and inhibitory inputs. Due to the apparent variability of cortical spike trains this yields a strongly fluctuating membrane potential, such that threshold crossings are highly irregular. Here we study how a neuron could maximize its sensitivity w.r.t. a relatively small subset of excitatory input. The regime of weak signals embedded in fluctuations is the natural realm of stochastic resonance. The neuron's response is described in a hazard-function approximation applied to an Ornstein-Uhlenbeck process.
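The setting can be illustrated by direct simulation rather than the hazard-function approximation used in the paper. The sketch below, with made-up parameters, drives an Ornstein-Uhlenbeck membrane potential with a weak subthreshold signal and sweeps the noise amplitude, so the correlation between threshold crossings and the signal can be compared across noise levels; stochastic resonance predicts that this response measure peaks at an intermediate noise level.

```python
# Sketch with illustrative parameters: an Ornstein-Uhlenbeck membrane potential
# driven by a weak subthreshold signal plus white noise (direct simulation, not
# the paper's hazard-function approximation). Sweeping the noise amplitude probes
# stochastic resonance via the signal/spike-train correlation.
import numpy as np

rng = np.random.default_rng(0)
dt, T, tau, theta = 1e-3, 20.0, 0.02, 1.0        # step, duration, membrane time constant, threshold
t = np.arange(0.0, T, dt)
signal = 0.3 * np.sin(2 * np.pi * 5.0 * t)       # weak signal: subthreshold on its own

def signal_spike_correlation(sigma):
    """Correlation between the weak input signal and the resulting spike train."""
    v, spikes = 0.0, np.zeros_like(t)
    for i, s in enumerate(signal):
        v += dt / tau * (s - v) + sigma * np.sqrt(dt) * rng.normal()
        if v >= theta:                           # threshold crossing = spike, then reset
            spikes[i] = 1.0
            v = 0.0
    return 0.0 if spikes.sum() == 0 else np.corrcoef(spikes, signal)[0, 1]

for sigma in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"noise sigma = {sigma:4.1f}   correlation = {signal_spike_correlation(sigma): .3f}")
```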
Application of Blind Separation of Sources to Optical Recording of Brain Activity
Schoner, Holger, Stetter, Martin, Schießl, Ingo, Mayhew, John E. W., Lund, Jennifer S., McLoughlin, Niall, Obermayer, Klaus
In the analysis of data recorded by optical imaging from intrinsic signals (measurement of changes of light reflectance from cortical tissue), the removal of noise and artifacts such as blood vessel patterns is a serious problem. Often bandpass filtering is used, but the underlying assumption that a spatial frequency exists which separates the mapping component from other components (especially the global signal) is questionable. Here we propose alternative ways of processing optical imaging data, using blind source separation techniques based on the spatial decorrelation of the data. We first perform benchmarks on artificial data in order to select the way of processing which is most robust with respect to sensor noise. We then apply it to recordings of optical imaging experiments from macaque primary visual cortex. We show that our BSS technique is able to extract ocular dominance and orientation preference maps from single condition stacks for data where standard post-processing procedures fail. Artifacts, especially blood vessel patterns, can often be completely removed from the maps. In summary, our method for blind source separation using extended spatial decorrelation is a superior technique for the analysis of optical recording data.
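The core idea of separating image channels by second-order spatial statistics can be sketched as follows; this is an AMUSE-style two-matrix method under illustrative assumptions (synthetic smooth sources, a random mixing matrix, a single pixel shift), not the extended spatial decorrelation algorithm of the paper: the mixed frames are whitened with the zero-shift covariance, and a spatially shifted covariance is then diagonalized to recover the remaining rotation.

```python
# Sketch: blind source separation by spatial decorrelation (AMUSE-style, two
# covariance matrices). Sources, mixing matrix, and the pixel shift are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: three spatially smooth source images mixed into three observed frames.
h, w, n = 64, 64, 3
xx, yy = np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h))
S = np.stack([np.sin(8 * np.pi * xx), np.cos(6 * np.pi * yy), np.sin(4 * np.pi * (xx + yy))])
A = rng.normal(size=(n, n))                      # unknown mixing matrix
X = np.tensordot(A, S, axes=1)                   # observed frames, shape (n, h, w)

Xf = X.reshape(n, -1) - X.reshape(n, -1).mean(axis=1, keepdims=True)

# Step 1: whiten the frames using the zero-shift covariance matrix.
C0 = Xf @ Xf.T / Xf.shape[1]
d, E = np.linalg.eigh(C0)
W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
Z = (W @ Xf).reshape(n, h, w)

# Step 2: diagonalize the covariance at a small horizontal pixel shift; its
# eigenvectors give the rotation that removes the remaining spatial cross-correlations.
shift = 3
a, b = Z[:, :, :-shift].reshape(n, -1), Z[:, :, shift:].reshape(n, -1)
C1 = (a @ b.T + b @ a.T) / (2 * a.shape[1])      # symmetrized shifted covariance
_, R = np.linalg.eigh(C1)
estimated_sources = (R.T @ W @ Xf).reshape(n, h, w)
print(estimated_sources.shape)                   # unmixed maps, up to sign and ordering
```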
Bayesian Transduction
Graepel, Thore, Herbrich, Ralf, Obermayer, Klaus
Transduction is an inference principle that takes a training sample and aims at estimating the values of a function at given points contained in the so-called working sample, as opposed to the whole of input space for induction. Transduction provides a confidence measure on single predictions rather than classifiers, a feature particularly important for risk-sensitive applications. The possibly infinite number of functions is reduced to a finite number of equivalence classes on the working sample. A rigorous Bayesian analysis reveals that for standard classification loss we cannot benefit from considering more than one test point at a time. The probability of the label of a given test point is determined as the posterior measure of the corresponding subset of hypothesis space.
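The posterior-measure idea can be illustrated with a Monte Carlo toy example; everything below (the data, the uniform prior over unit weight vectors, the sample size) is an illustrative assumption rather than the paper's construction. Hypotheses consistent with the training sample form the version space, and the label probability for a single working point is the fraction of that posterior mass voting for each label.

```python
# Sketch: sample linear classifiers from a prior, keep those consistent with the
# training sample (the version space), and report the test label probability as
# the posterior mass of hypotheses voting for each label. Data are toy values.
import numpy as np

rng = np.random.default_rng(0)

X_train = np.array([[1.0, 2.0], [2.0, 1.5], [-1.0, -2.0], [-2.0, -1.0]])
y_train = np.array([1, 1, -1, -1])
x_test = np.array([1.0, -0.5])                   # single working (test) point

# Monte Carlo over hypotheses: weight vectors drawn uniformly from the unit circle.
w = rng.normal(size=(100000, 2))
w /= np.linalg.norm(w, axis=1, keepdims=True)

consistent = np.all(np.sign(X_train @ w.T) == y_train[:, None], axis=0)
votes = np.sign(x_test @ w[consistent].T)

p_plus = np.mean(votes > 0)
print(f"posterior P(y=+1 | training sample): {p_plus:.3f}   "
      f"({consistent.sum()} consistent hypotheses)")
```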