Natschläger, Thomas
Robust Unsupervised Domain Adaptation for Neural Networks via Moment Alignment
Zellinger, Werner, Moser, Bernhard A., Grubinger, Thomas, Lughofer, Edwin, Natschläger, Thomas, Saminger-Platz, Susanne
A novel approach for unsupervised domain adaptation for neural networks is proposed that relies on a metric-based regularization of the learning process. The regularization aims at domain-invariant latent feature representations by maximizing the similarity between domain-specific activation distributions. The proposed metric results from modifying an integral probability metric such that it becomes translation-invariant on a polynomial reproducing kernel Hilbert space. The metric has an intuitive interpretation in the dual space as the sum of differences of central moments of the corresponding activation distributions. As demonstrated by an analysis on standard benchmark datasets for sentiment analysis and object recognition, the outlined approach shows more robustness w.r.t. parameter changes than state-of-the-art approaches while achieving even higher classification accuracies.
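For orientation, a minimal sketch of the dual-space form suggested by the abstract: the metric compares two distributions p and q order-wise through their central moments. The weights a_k and the maximal order K are notational assumptions for illustration, not taken from the paper:

\[
d(p,q) \;=\; \sum_{k=1}^{K} a_k \left\| c_k(p) - c_k(q) \right\|_2,
\qquad
c_1(p) = \mathbb{E}[X_p], \quad
c_k(p) = \mathbb{E}\!\left[(X_p - \mathbb{E}[X_p])^k\right] \;\; (k \ge 2).
\]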
Central Moment Discrepancy (CMD) for Domain-Invariant Representation Learning
Zellinger, Werner, Grubinger, Thomas, Lughofer, Edwin, Natschläger, Thomas, Saminger-Platz, Susanne
The learning of domain-invariant representations in the context of domain adaptation with neural networks is considered. We propose a new regularization method that minimizes the discrepancy between domain-specific latent feature representations directly in the hidden activation space. Although some standard distribution matching approaches exist that can be interpreted as the matching of weighted sums of moments, e.g. Maximum Mean Discrepancy (MMD), an explicit order-wise matching of higher order moments has not been considered before. We propose to match the higher order central moments of probability distributions by means of order-wise moment differences. Our model does not require computationally expensive distance and kernel matrix computations. We utilize the equivalent representation of probability distributions by moment sequences to define a new distance function, called Central Moment Discrepancy (CMD). We prove that CMD is a metric on the set of probability distributions on a compact interval. We further prove that convergence of probability distributions on compact intervals w.r.t. the new metric implies convergence in distribution of the respective random variables. We test our approach on two different benchmark data sets for object recognition (Office) and sentiment analysis of product reviews (Amazon reviews). CMD achieves a new state-of-the-art performance on most domain adaptation tasks of Office and outperforms networks trained with MMD, Variational Fair Autoencoders and Domain Adversarial Neural Networks on Amazon reviews. In addition, a post-hoc parameter sensitivity analysis shows that the new approach is stable w.r.t. parameter changes in a certain interval. The source code of the experiments is publicly available.
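A minimal NumPy sketch of the order-wise moment matching that CMD performs, assuming hidden activations bounded in a compact interval [a, b] (e.g. sigmoid outputs in [0, 1]); the per-order normalization by |b - a|^k and the default K = 5 are assumptions based on common descriptions of the method:

```python
import numpy as np

def cmd(x, y, k_max=5, a=0.0, b=1.0):
    """Central Moment Discrepancy between two activation samples.

    x, y: arrays of shape (n_samples, n_features), assumed bounded in [a, b].
    k_max: highest central-moment order to match (an assumption; K = 5 is a
           commonly reported choice for this method).
    """
    scale = abs(b - a)
    mx, my = x.mean(axis=0), y.mean(axis=0)
    # First-order term: difference of means, normalized by the interval width.
    d = np.linalg.norm(mx - my) / scale
    # Higher-order terms: order-wise differences of central moments.
    for k in range(2, k_max + 1):
        cx = ((x - mx) ** k).mean(axis=0)
        cy = ((y - my) ** k).mean(axis=0)
        d += np.linalg.norm(cx - cy) / scale ** k
    return d
```

In a network, a quantity of this form would be added to the classification loss as a regularizer on the source- and target-domain activations of a hidden layer, as the abstract describes.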
At the Edge of Chaos: Real-time Computations and Self-Organized Criticality in Recurrent Neural Networks
Bertschinger, Nils, Natschläger, Thomas, Legenstein, Robert A.
In this paper we analyze the relationship between the computational capabilities of randomly connected networks of threshold gates in the time series domain and their dynamical properties. In particular we propose a complexity measure which we find to assume its highest values near the edge of chaos, i.e. the transition from ordered to chaotic dynamics. Furthermore we show that the proposed complexity measure predicts the computational capabilities very well: only near the edge of chaos are such networks able to perform complex computations on time series. Additionally, a simple synaptic scaling rule for self-organized criticality is presented and analyzed.
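As an illustration of the ordered-versus-chaotic regimes the paper studies, the following sketch measures damage spreading in a randomly connected threshold-gate network: two trajectories started from states differing in a single gate either reconverge (order) or stay apart (chaos) as the weight scale grows. This is a generic probe under arbitrary parameter choices, not the complexity measure proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x, w, w_in, u):
    """One parallel update of a network of threshold gates driven by a
    common input bit u."""
    return (w @ x + w_in * u > 0.0).astype(float)

def damage(sigma, n=250, t_steps=100):
    """Fraction of gates on which two trajectories differ after t_steps,
    starting from initial states that differ in a single gate."""
    w = rng.normal(0.0, sigma / np.sqrt(n), size=(n, n))
    w_in = rng.normal(0.0, 1.0, size=n)
    x1 = rng.integers(0, 2, n).astype(float)
    x2 = x1.copy()
    x2[0] = 1.0 - x2[0]                  # flip one gate
    for _ in range(t_steps):
        u = float(rng.integers(0, 2))    # same input stream for both copies
        x1, x2 = step(x1, w, w_in, u), step(x2, w, w_in, u)
    return np.mean(x1 != x2)

# Small weight scale: the perturbation dies out (ordered regime).
# Large weight scale: it spreads through the network (chaotic regime).
for sigma in (0.5, 1.0, 2.0, 4.0):
    print(f"sigma={sigma}: damage={damage(sigma):.2f}")
```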
Information Dynamics and Emergent Computation in Recurrent Circuits of Spiking Neurons
Natschläger, Thomas, Maass, Wolfgang
We employ an efficient method using Bayesian and linear classifiers for analyzing the dynamics of information in high-dimensional states of generic cortical microcircuit models. It is shown that such recurrent circuits of spiking neurons have an inherent capability to carry out rapid computations on complex spike patterns, merging information contained in the order of spike arrival with previously acquired context information.
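A toy sketch of the analysis method the abstract describes: record the high-dimensional state of a recurrent circuit over time and train a linear classifier on the state at each time step; the classifier's accuracy traces how long information about an early input symbol persists in the circuit. A rate-based tanh network stands in for the spiking microcircuit model here, and all sizes and scalings are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_circuit(inputs, w, w_in):
    """State trajectory of a rate-based surrogate circuit (tanh units)."""
    x, states = np.zeros(w.shape[0]), []
    for u in inputs:
        x = np.tanh(w @ x + w_in * u)
        states.append(x.copy())
    return np.array(states)

n, t_steps, trials = 100, 30, 200
w = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))
w_in = rng.normal(0.0, 1.0, n)

# Each trial: a noise stream whose first symbol encodes the class label.
labels = rng.integers(0, 2, trials)
states = np.zeros((trials, t_steps, n))
for i in range(trials):
    u = rng.normal(0.0, 1.0, t_steps)
    u[0] = 2.0 * labels[i] - 1.0
    states[i] = run_circuit(u, w, w_in)

# A least-squares linear readout trained on the state at each time step;
# its (training) accuracy traces how long the class information remains
# linearly decodable from the circuit state.
for t in range(t_steps):
    Xt = np.hstack([states[:, t], np.ones((trials, 1))])
    beta, *_ = np.linalg.lstsq(Xt, 2.0 * labels - 1.0, rcond=None)
    acc = np.mean((Xt @ beta > 0) == labels)
    print(f"t={t:2d}: readout accuracy {acc:.2f}")
```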
A Model for Real-Time Computation in Generic Neural Microcircuits
Maass, Wolfgang, Natschläger, Thomas, Markram, Henry
A key challenge for neural modeling is to explain how a continuous stream of multi-modal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real-time. We propose a new computational model that is based on principles of high dimensional dynamical systems in combination with statistical learning theory. It can be implemented on generic evolved or found recurrent circuitry.
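A minimal sketch of the proposed computational scheme under simplifying assumptions: a fixed, randomly connected recurrent circuit projects the input stream into a high-dimensional state, and only a memoryless linear readout is trained (here by least squares) to compute a target function of the input in real time. A tanh rate network again stands in for the integrate-and-fire circuit, and the task and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

n, t_steps = 200, 2000
w = rng.normal(0.0, 0.9 / np.sqrt(n), (n, n))   # fixed random recurrent circuit
w_in = rng.normal(0.0, 1.0, n)

# Drive the untrained circuit with a continuous input stream and record
# its high-dimensional state at every time step.
u = rng.uniform(-1.0, 1.0, t_steps)
x, states = np.zeros(n), np.zeros((t_steps, n))
for t in range(t_steps):
    x = np.tanh(w @ x + w_in * u[t])
    states[t] = x

# Target: a fading-memory function of the input (moving average over the
# last 5 steps). Only the memoryless linear readout is trained.
target = np.convolve(u, np.ones(5) / 5.0)[:t_steps]
X = np.hstack([states, np.ones((t_steps, 1))])
train, test = slice(100, 1000), slice(1000, None)   # skip initial transient
beta, *_ = np.linalg.lstsq(X[train], target[train], rcond=None)
corr = np.corrcoef(X[test] @ beta, target[test])[0, 1]
print(f"readout/target correlation on held-out steps: {corr:.2f}")
```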
Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics
Natschläger, Thomas, Maass, Wolfgang, Sontag, Eduardo D., Zador, Anthony M.
Experimental data show that biological synapses behave quite differently from the symbolic synapses in common artificial neural network models. Biological synapses are dynamic, i.e., their "weight" changes on a short time scale by several hundred percent depending on the past input to the synapse. In this article we explore the consequences that these synaptic dynamics entail for the computational power of feedforward neural networks. We show that gradient descent suffices to approximate a given (quadratic) filter by a rather small neural system with dynamic synapses. We also compare our network model to artificial neural networks designed for time series processing. Our numerical results are complemented by a theoretical analysis which shows that even with just a single hidden layer such networks can approximate a surprisingly large class of nonlinear filters: all filters that can be characterized by Volterra series. This result is robust with regard to various changes in the model for synaptic dynamics.
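For reference, the class of filters named in the last claim: a filter F has a Volterra series characterization if its output can be written as a sum of integral terms of increasing order in the input u,

\[
(Fu)(t) \;=\; h_0
+ \int_0^\infty h_1(\tau)\,u(t-\tau)\,d\tau
+ \int_0^\infty\!\!\int_0^\infty h_2(\tau_1,\tau_2)\,u(t-\tau_1)\,u(t-\tau_2)\,d\tau_1\,d\tau_2
+ \dots
\]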
Finding the Key to a Synapse
Natschläger, Thomas, Maass, Wolfgang
Experimental data have shown that synapses are heterogeneous: different synapses respond with different sequences of amplitudes of postsynaptic responses to the same spike train. Neither the role of synaptic dynamics itself nor the role of the heterogeneity of synaptic dynamics for computations in neural circuits is well understood. We present in this article methods that make it feasible to compute, for a given synapse with known synaptic parameters, the spike train that is optimally fitted to the synapse, for example in the sense that it produces the largest sum of postsynaptic responses. To our surprise we find that most of these optimally fitted spike trains match common firing patterns of specific types of neurons that are discussed in the literature.

1 Introduction

A large number of experimental studies have shown that biological synapses have an inherent dynamics, which controls how the pattern of amplitudes of postsynaptic responses depends on the temporal pattern of the incoming spike train. Various quantitative models have been proposed involving a small number of characteristic parameters that allow us to predict the response of a given synapse to a given spike train once proper values for these characteristic synaptic parameters have been found. The analysis of this article is based on the model of [1], where three parameters U, F, D control the dynamics of a synapse and a fourth parameter A, which corresponds to the synaptic "weight" in static synapse models, scales the absolute sizes of the postsynaptic responses. The resulting model predicts the amplitude A_k for the k-th spike in a spike train with interspike intervals (ISIs) Δ_1, Δ_2, ..., Δ_{k-1}.
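A small sketch of that prediction, assuming one common form of the U, F, D model of [1] (the precise update equations vary slightly across papers); the parameter values in the example are illustrative only:

```python
import numpy as np

def synaptic_amplitudes(isis, U, F, D, A=1.0):
    """Amplitudes A_k = A * u_k * R_k predicted by a U, F, D dynamic
    synapse model (one common form; an assumption, not necessarily the
    paper's exact equations).

    isis: interspike intervals Delta_1, ..., Delta_{k-1}, in the same
          time unit as F and D.
    U: utilization of synaptic efficacy; F: facilitation time constant;
    D: depression (recovery) time constant; A: absolute synaptic weight.
    """
    u, r = U, 1.0                        # state at the first spike
    amps = [A * u * r]                   # response to the first spike
    for delta in isis:
        u_next = U + u * (1.0 - U) * np.exp(-delta / F)
        r_next = 1.0 + (r - u * r - 1.0) * np.exp(-delta / D)
        u, r = u_next, r_next
        amps.append(A * u * r)
    return np.array(amps)

# Example: a facilitation-dominated synapse (illustrative parameters)
# responds with growing amplitudes to a regular 50 ms spike train.
print(synaptic_amplitudes([0.05] * 5, U=0.16, F=0.376, D=0.045).round(3))
```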