
Collaborating Authors

 Aeron, Shuchin


Conditional entropy minimization principle for learning domain invariant representation features

arXiv.org Artificial Intelligence

Invariance principle-based methods, for example, Invariant Risk Minimization (IRM), have recently emerged as promising approaches for Domain Generalization (DG). Despite the promising theory, invariance principle-based approaches fail in common classification tasks because the learned representation mixes truly invariant features with spuriously invariant ones. In this paper, we propose a framework based on the conditional entropy minimization principle to filter out the spurious invariant features, leading to a new algorithm with better generalization capability. We theoretically prove that, under certain assumptions, the representation function can precisely recover the true invariant features. In addition, we show that the proposed approach is closely related to the well-known Information Bottleneck framework. Both theoretical and numerical results are provided to justify our approach.
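As a concrete illustration only (not the paper's implementation), the sketch below estimates a conditional-entropy term H(Z|Y) under a Gaussian approximation of each class-conditional feature distribution and adds it to a classification loss; the encoder, the penalty weight lam, and all names are hypothetical.

```python
import numpy as np

def gaussian_conditional_entropy(Z, y):
    """Estimate H(Z | Y) under a Gaussian approximation of each
    class-conditional feature distribution, using
    H(N(m, S)) = 0.5 * logdet(2 * pi * e * S), weighted by class priors."""
    n, d = Z.shape
    h = 0.0
    for c in np.unique(y):
        Zc = Z[y == c]
        S = np.cov(Zc, rowvar=False) + 1e-6 * np.eye(d)  # regularize for stability
        h += (len(Zc) / n) * 0.5 * np.linalg.slogdet(2 * np.pi * np.e * S)[1]
    return h

# Illustrative use: penalize the entropy of features given labels so the
# representation retains only label-relevant (non-spurious) information.
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 8))      # features from some encoder phi(X)
y = rng.integers(0, 3, size=200)   # class labels
lam = 0.1                          # penalty weight (hypothetical)
cross_entropy = 1.0                # stand-in for the usual classification loss
total_loss = cross_entropy + lam * gaussian_conditional_entropy(Z, y)
print(total_loss)
```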


Robust and efficient change point detection using novel multivariate rank-energy GoF test

arXiv.org Machine Learning

In this paper, we use and further develop a recently proposed multivariate, distribution-free Goodness-of-Fit (GoF) test based on the theory of Optimal Transport (OT), called the Rank Energy (RE) [1], for non-parametric and unsupervised Change Point Detection (CPD) in multivariate time series data. We show that directly using RE leads to high sensitivity to very small changes in distributions (causing high false alarm rates) and incurs high sample and computational complexity. To alleviate these drawbacks, we propose a new GoF test statistic called the soft-Rank Energy (sRE), based on entropy-regularized OT, and employ it for CPD. We discuss the advantages of using sRE over RE and demonstrate that the proposed sRE-based CPD outperforms all existing methods in terms of Area Under the Curve (AUC) and F1-score on real and synthetic data sets.
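The sketch below illustrates one plausible construction of the sRE, assuming soft ranks are obtained as the barycentric projection of an entropic OT plan from the pooled sample to uniformly drawn reference points in [0,1]^d, computed with plain Sinkhorn iterations; it is a minimal illustration, not the authors' released code.

```python
import numpy as np

def sinkhorn_plan(C, reg=1.0, n_iter=200):
    """Plain Sinkhorn iterations for entropic OT with uniform marginals.
    Assumes the cost matrix C is on a moderate scale (else increase reg)."""
    n, m = C.shape
    a, b = np.ones(n) / n, np.ones(m) / m
    K = np.exp(-C / reg)
    v = np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

def energy_distance(A, B):
    """Empirical energy distance between two point sets."""
    def pd(P, Q):
        return np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return 2 * pd(A, B).mean() - pd(A, A).mean() - pd(B, B).mean()

def soft_rank_energy(X, Y, reg=1.0, seed=0):
    """Soft ranks as the barycentric projection of an entropic OT plan from
    the pooled sample onto uniform reference points in [0, 1]^d; the sRE is
    the energy distance between the two samples' soft ranks."""
    Z = np.vstack([X, Y])
    U = np.random.default_rng(seed).uniform(size=Z.shape)   # reference points
    C = ((Z[:, None, :] - U[None, :, :]) ** 2).sum(-1)
    P = sinkhorn_plan(C, reg)
    R = (P / P.sum(axis=1, keepdims=True)) @ U              # soft rank map
    return energy_distance(R[:len(X)], R[len(X):])
```

For CPD, one would evaluate this statistic between the two halves of a sliding window and flag peaks; the entropic regularization is what makes the statistic smooth and cheap relative to exact ranks.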


Learning generative models for valid knockoffs using novel multivariate-rank based statistics

arXiv.org Machine Learning

We consider the problem of generating valid knockoffs for knockoff filtering, a statistical method that provides provable false discovery rate guarantees for any model selection procedure. To this end, we are motivated by recent advances in multivariate distribution-free goodness-of-fit tests, namely the rank energy (RE), which is derived using theoretical results characterizing the optimal maps in Monge's Optimal Transport (OT) problem. However, direct use of RE for learning generative models is not feasible because of its high computational and sample complexity, saturation under large support discrepancy between distributions, and non-differentiability in the generative parameters. To alleviate these issues, we begin by proposing a variant of the RE, dubbed the soft rank energy (sRE), and its kernel variant, called the soft rank maximum mean discrepancy (sRMMD), using entropic regularization of Monge's OT problem. We then use sRMMD to generate deep knockoffs and show via extensive evaluation that it is a novel and effective method for producing valid knockoffs, achieving comparable, or in some cases improved, trade-offs between detection power and false discoveries.
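As a rough illustration of what knockoff validity means operationally, the sketch below uses a plain Gaussian-kernel MMD^2 (not the paper's sRMMD) to check that swapping random coordinate subsets between X and its knockoff leaves the joint distribution approximately unchanged; all names and the bandwidth are illustrative.

```python
import numpy as np

def mmd2_rbf(A, B, sigma=1.0):
    """Gaussian-kernel MMD^2 (simple biased V-statistic)."""
    def k(P, Q):
        sq = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2))
    return k(A, A).mean() + k(B, B).mean() - 2 * k(A, B).mean()

def knockoff_swap_diagnostic(X, Xk, n_swaps=10, seed=0):
    """For valid knockoffs, swapping any coordinate subset between X and Xk
    leaves the joint distribution unchanged, so the MMD^2 between the
    original joint [X, Xk] and a randomly swapped joint should be near 0."""
    rng = np.random.default_rng(seed)
    J = np.hstack([X, Xk])
    vals = []
    for _ in range(n_swaps):
        S = rng.random(X.shape[1]) < 0.5           # random subset to swap
        Xs, Xks = X.copy(), Xk.copy()
        Xs[:, S], Xks[:, S] = Xk[:, S], X[:, S]
        vals.append(mmd2_rbf(J, np.hstack([Xs, Xks])))
    return float(np.mean(vals))
```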


Dynamical Wasserstein Barycenters for Time-series Modeling

arXiv.org Machine Learning

Many time series can be modeled as a sequence of segments representing high-level discrete states, such as running and walking in a human activity application. Flexible models should describe the system state and observations in stationary "pure-state" periods as well as transition periods between adjacent segments, such as a gradual slowdown between running and walking. However, most prior work assumes instantaneous transitions between pure discrete states. We propose a dynamical Wasserstein barycentric (DWB) model that estimates the system state over time as well as the data-generating distributions of pure states in an unsupervised manner. Our model assumes each pure state generates data from a multivariate normal distribution, and characterizes transitions between states via displacement-interpolation specified by the Wasserstein barycenter. The system state is represented by a barycentric weight vector which evolves over time via a random walk on the simplex. Parameter learning leverages the natural Riemannian geometry of Gaussian distributions under the Wasserstein distance, which leads to improved convergence speeds. Experiments on several human activity datasets show that our proposed DWB model accurately learns the generating distribution of pure states while improving state estimation for transition periods compared to the commonly used linear interpolation mixture models.
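The displacement interpolation between Gaussian pure states rests on the Wasserstein-2 barycenter of Gaussians, which admits the well-known fixed-point iteration of Alvarez-Esteban et al. The self-contained sketch below implements that iteration in isolation; it is not the paper's full learning algorithm.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2_barycenter(means, covs, weights, n_iter=50):
    """Wasserstein-2 barycenter of Gaussians via the fixed-point iteration
    of Alvarez-Esteban et al.: the mean is the weighted average of means,
    and the covariance is iterated as
    S <- S^{-1/2} (sum_k w_k (S^{1/2} S_k S^{1/2})^{1/2})^2 S^{-1/2}."""
    m = sum(w * mu for w, mu in zip(weights, means))
    S = np.eye(len(m))
    for _ in range(n_iter):
        S_half = np.real(sqrtm(S))
        S_half_inv = np.linalg.inv(S_half)
        T = sum(w * np.real(sqrtm(S_half @ Sk @ S_half))
                for w, Sk in zip(weights, covs))
        S = S_half_inv @ T @ T @ S_half_inv
    return m, S

# Displacement interpolation between two hypothetical pure states,
# 30 % of the way from state A to state B:
mu, Sigma = gaussian_w2_barycenter(
    [np.zeros(2), np.ones(2)], [np.eye(2), 2 * np.eye(2)], [0.7, 0.3])
```

Sweeping the weight vector along the simplex, as the model's random walk does, traces a smooth path of distributions between the pure states rather than an abrupt switch.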


Barycentric distribution alignment and manifold-restricted invertibility for domain generalization

arXiv.org Machine Learning

For the Domain Generalization (DG) problem, where the hypotheses are composed of a common representation function followed by a labeling function, we point out a shortcoming of existing approaches: they fail to explicitly optimize a representation-dependent term that appears in a well-known and widely adopted upper bound on the risk on the unseen domain. To this end, we first derive a novel upper bound on the prediction risk. We show that imposing a mild assumption on the representation to be learned, namely manifold-restricted invertibility, is sufficient to deal with this issue. Further, unlike existing approaches, our novel upper bound does not require the assumption of Lipschitzness of the loss function. In addition, the distributional discrepancy in the representation space is handled via the Wasserstein-2 barycenter cost. In this context, we leverage old and recent transport inequalities, which link various optimal transport metrics, in particular the $L^1$ distance (also known as the total variation distance) and the Wasserstein-2 distance, with the Kullback-Leibler divergence. These analyses and insights motivate a new representation learning cost for DG that additively balances three competing objectives: 1) minimizing classification error across seen domains via cross-entropy, 2) enforcing domain invariance in the representation space via the Wasserstein-2 barycenter cost, and 3) promoting a non-degenerate, nearly invertible representation via one of two mechanisms: an autoencoder-based reconstruction loss or a mutual information loss. Notably, the proposed algorithms completely bypass the adversarial training mechanisms typical of many current domain generalization approaches. Simulation results on several standard datasets demonstrate superior performance compared to several well-known DG algorithms.
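To make the second objective concrete, the sketch below computes the closed-form Gaussian Wasserstein-2 distance and uses the pooled Gaussian as a crude stand-in for the barycenter; this is an illustration under a Gaussian approximation of per-domain features, not the proposed algorithm.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(m1, S1, m2, S2):
    """Closed-form squared Wasserstein-2 distance between Gaussians:
    ||m1 - m2||^2 + tr(S1 + S2 - 2 (S1^{1/2} S2 S1^{1/2})^{1/2})."""
    S1h = np.real(sqrtm(S1))
    cross = np.real(sqrtm(S1h @ S2 @ S1h))
    return np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2 * cross)

def dg_alignment_penalty(domain_feats):
    """Illustrative domain-invariance penalty: squared W2 from each domain's
    Gaussian-approximated feature distribution to the pooled Gaussian (a
    crude stand-in for the Wasserstein-2 barycenter used in the paper)."""
    pooled = np.vstack(domain_feats)
    mb, Sb = pooled.mean(0), np.cov(pooled, rowvar=False)
    return sum(gaussian_w2(Z.mean(0), np.cov(Z, rowvar=False), mb, Sb)
               for Z in domain_feats)
```

In the full cost, such a penalty would be added to the cross-entropy and reconstruction (or mutual information) terms described above.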


Soft and subspace robust multivariate rank tests based on entropy regularized optimal transport

arXiv.org Machine Learning

In this paper, we extend the recently proposed multivariate rank energy distance, a statistic based on the theory of optimal transport for testing distributional similarity, to a soft rank energy distance. Being differentiable, the soft rank energy in turn allows us to define a subspace-robust variant, dubbed the Projected soft-Rank Energy distance, which can be computed via optimization over the Stiefel manifold. We show via experiments that using the projected soft rank energy one can trade off detection power against the false alarm rate via projections onto an appropriately selected low-dimensional subspace. We also show the utility of the proposed tests for unsupervised change point detection in multivariate time series data. All codes are publicly available at the link provided in the experiment section.
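A minimal sketch of the subspace search, assuming the plain energy distance of the projected samples as a differentiable surrogate objective (the paper optimizes the soft rank energy); the step size and QR retraction are illustrative choices.

```python
import numpy as np

def proj_norm_grad(D, U, eps=1e-9):
    """Gradient of sum_i ||U^T d_i|| with respect to U, where the rows of D
    are the difference vectors d_i: grad = sum_i d_i d_i^T U / ||U^T d_i||."""
    proj = D @ U
    nrm = np.sqrt((proj ** 2).sum(axis=1)) + eps
    return D.T @ (proj / nrm[:, None])

def stiefel_energy_ascent(X, Y, k=2, step=0.1, n_iter=100, seed=0):
    """Maximize the energy distance of k-dimensional projections over the
    Stiefel manifold with gradient ascent plus a QR retraction."""
    d = X.shape[1]
    U = np.linalg.qr(np.random.default_rng(seed).normal(size=(d, k)))[0]
    n, m = len(X), len(Y)
    # All pairwise differences, with the energy-distance weights.
    terms = [
        (np.repeat(X, m, 0) - np.tile(Y, (n, 1)), 2.0 / (n * m)),
        (np.repeat(X, n, 0) - np.tile(X, (n, 1)), -1.0 / n ** 2),
        (np.repeat(Y, m, 0) - np.tile(Y, (m, 1)), -1.0 / m ** 2),
    ]
    for _ in range(n_iter):
        G = sum(w * proj_norm_grad(D, U) for D, w in terms)
        U = np.linalg.qr(U + step * G)[0]   # retract back onto the manifold
    return U
```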


Domain Adaptation for Robust Workload Level Alignment Between Sessions and Subjects using fNIRS

arXiv.org Artificial Intelligence

Significance: We demonstrated the potential of using domain adaptation on functional Near-Infrared Spectroscopy (fNIRS) data to classify different levels of n-back tasks that involve working memory.

Aim: Domain shift in fNIRS data is a challenge for workload level alignment across different experiment sessions and subjects. To address this problem, two domain adaptation approaches, Gromov-Wasserstein (G-W) and Fused Gromov-Wasserstein (FG-W), were used.

Approach: Specifically, we used labeled data from one session or one subject to classify trials in another session (within the same subject) or another subject. We applied G-W for session-by-session alignment and FG-W for subject-by-subject alignment to fNIRS data acquired during different n-back task levels. We compared these approaches with three supervised methods: multi-class Support Vector Machine (SVM), Convolutional Neural Network (CNN), and Recurrent Neural Network (RNN).

Results: In a sample of six subjects, G-W achieved an alignment accuracy of 68 $\pm$ 4 % (weighted mean $\pm$ standard error) for session-by-session alignment, while FG-W achieved 55 $\pm$ 2 % for subject-by-subject alignment; in each case, 25 % accuracy represents chance. Alignment accuracies from both G-W and FG-W are significantly greater than those from SVM, CNN, and RNN. We also showed that removal of motion artifacts from the fNIRS data plays an important role in improving alignment performance.

Conclusions: Domain adaptation has potential for session-by-session and subject-by-subject alignment of mental workload using fNIRS data.
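The sketch below shows session-by-session alignment with label transfer via Gromov-Wasserstein, assuming the POT package and a generic trial-by-feature matrix; the fNIRS preprocessing and the FG-W subject-level variant are omitted, so this is illustrative rather than the study's actual pipeline.

```python
import numpy as np
import ot  # the POT (Python Optimal Transport) package

def gw_label_transfer(X_src, y_src, X_tgt, n_classes):
    """Align unlabeled target trials to labeled source trials with
    Gromov-Wasserstein (only within-domain distance structure is compared),
    then assign each target trial the class receiving the most coupling mass."""
    C1, C2 = ot.dist(X_src, X_src), ot.dist(X_tgt, X_tgt)
    C1, C2 = C1 / C1.max(), C2 / C2.max()
    p, q = ot.unif(len(X_src)), ot.unif(len(X_tgt))
    T = ot.gromov.gromov_wasserstein(C1, C2, p, q, 'square_loss')
    onehot = np.eye(n_classes)[y_src]   # (n_src, n_classes)
    return (T.T @ onehot).argmax(axis=1)
```

Because G-W compares only intra-domain distance matrices, it tolerates the feature-space mismatch between sessions that defeats a classifier trained directly on one session's features.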


On Matched Filtering for Statistical Change Point Detection

arXiv.org Machine Learning

Non-parametric and distribution-free two-sample tests have been the foundation of many change point detection algorithms. However, randomness in the test statistic as a function of time makes them susceptible to false positives and localization ambiguity. We address these issues by deriving and applying filters matched to the expected temporal signatures of a change for various sliding window, two-sample tests under IID assumptions on the data. These filters are derived asymptotically with respect to the window size for the Wasserstein quantile test, the Wasserstein-1 distance test, Maximum Mean Discrepancy squared (MMD^2), and the Kolmogorov-Smirnov (KS) test. The matched filters are shown to have two important properties. First, they are distribution-free, and thus can be applied without prior knowledge of the underlying data distributions. Second, they are peak-preserving, which allows the filtered signal produced by our methods to maintain expected statistical significance. Through experiments on synthetic data as well as activity recognition benchmarks, we demonstrate the utility of this approach for mitigating false positives and improving the test precision. Our method allows for the localization of change points without the use of ad-hoc post-processing to remove redundant detections common to current methods. We further highlight the performance of statistical tests based on the Quantile-Quantile (Q-Q) function and show how the invariance property of the Q-Q function to order-preserving transformations allows these tests to detect change points of different scales with a single threshold within the same dataset.
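A toy sketch of the idea, assuming a sliding-window KS statistic and a symmetric triangular template in place of the paper's test-specific asymptotic matched filters:

```python
import numpy as np
from scipy.stats import ks_2samp

def sliding_ks(x, w):
    """KS statistic between the two halves of a length-2w window at each t."""
    stats = np.zeros(len(x))
    for t in range(w, len(x) - w):
        stats[t] = ks_2samp(x[t - w:t], x[t:t + w]).statistic
    return stats

def matched_filter(stats, w):
    """Correlate the statistic sequence with a triangular template, the
    ramp-up/ramp-down shape a sliding-window statistic traces as it passes
    a change point. (The paper derives test-specific asymptotic templates;
    a symmetric triangle is the simplest stand-in.)"""
    tri = np.concatenate([np.linspace(0, 1, w), np.linspace(1, 0, w)])
    return np.convolve(stats, tri / tri.sum(), mode='same')

# Toy usage: a unit mean shift at t = 300.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(1, 1, 300)])
print(matched_filter(sliding_ks(x, 50), 50).argmax())  # peak near 300
```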


Unlabeled sensing with local permutations

arXiv.org Machine Learning

Unlabeled sensing is a linear inverse problem where the measurements are scrambled under an unknown permutation leading to loss of correspondence between the measurements and the rows of the sensing matrix. Motivated by practical tasks such as mobile sensor networks, target tracking and the pose and correspondence estimation between point clouds, we study a special case of this problem restricting the class of permutations to be local and allowing for multiple views. In this setting, namely unlabeled multi-view sensing with local permutation, previous results and algorithms are not directly applicable. In this paper, we propose a computationally efficient algorithm that creatively exploits the machinery of graph alignment and Gromov-Wasserstein alignment and leverages the multiple views to estimate the local permutations. Simulation results on synthetic data sets indicate that the proposed algorithm is scalable and applicable to the challenging regimes of low to moderate SNR.
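For intuition, the sketch below solves a simplified version of the problem by block-wise minimum-cost assignment, assuming an initial signal estimate X_hat is available; the paper's actual algorithm relies on graph alignment and Gromov-Wasserstein alignment instead.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def estimate_local_permutation(A, Y, X_hat, block=4):
    """Toy baseline for Y = P A X with an unknown permutation P acting only
    inside consecutive blocks of rows, given multiple views (columns of X)
    and an initial estimate X_hat: match observed rows of Y to predicted
    rows of A @ X_hat within each block by minimum-cost assignment."""
    pred = A @ X_hat                       # (n, n_views)
    n = Y.shape[0]
    perm = np.arange(n)
    for s in range(0, n, block):
        e = min(s + block, n)
        cost = ((Y[s:e, None, :] - pred[None, s:e, :]) ** 2).sum(-1)
        _, cols = linear_sum_assignment(cost)   # rows are returned in order
        perm[s:e] = s + cols                    # Y[i] matches row perm[i] of A X
    return perm
```

The locality restriction is what makes the search tractable: each block contributes a small assignment problem rather than one permutation over all n rows, and the multiple views sharpen the per-block costs.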


Robust Machine Learning via Privacy/Rate-Distortion Theory

arXiv.org Machine Learning

Robust machine learning formulations have emerged to address the prevalent vulnerability of deep neural networks to adversarial examples. Our work draws the connection between optimal robust learning and the privacy-utility tradeoff problem, which is a generalization of the rate-distortion problem. The saddle point of the game between a robust classifier and an adversarial perturbation can be found via the solution of a maximum conditional entropy problem. This information-theoretic perspective sheds light on the fundamental tradeoff between robustness and clean data performance, which ultimately arises from the geometric structure of the underlying data distribution and perturbation constraints. Further, we show that under mild conditions, the worst case adversarial distribution with Wasserstein-ball constraints on the perturbation has a fixed point characterization. This is obtained via the first order necessary conditions for optimality of the derived maximum conditional entropy problem. This fixed point characterization exposes the interplay between the geometry of the ground cost in the Wasserstein-ball constraint, the worst-case adversarial distribution, and the given reference data distribution.
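For reference, the sketch below implements the classical Blahut-Arimoto iteration for the rate-distortion problem, the special case that the privacy-utility tradeoff described above generalizes; it is illustrative context rather than the paper's method.

```python
import numpy as np

def blahut_arimoto(p_x, D, beta, n_iter=200):
    """Classical Blahut-Arimoto iteration for rate-distortion, the special
    case generalized by the privacy-utility formulation above. D[i, j] is
    the distortion d(x_i, xhat_j); beta trades rate against distortion."""
    n, m = D.shape
    q_xhat = np.ones(m) / m
    for _ in range(n_iter):
        # q(xhat | x) is proportional to q(xhat) * exp(-beta * d(x, xhat)).
        q_cond = q_xhat[None, :] * np.exp(-beta * D)
        q_cond /= q_cond.sum(axis=1, keepdims=True)
        q_xhat = p_x @ q_cond   # update the reproduction marginal
    return q_cond, q_xhat
```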