




Deep Diffusion-Invariant Wasserstein Distributional Classification

Neural Information Processing Systems

How can the stochastic properties of input data and labels be appropriately captured to handle severe perturbations? To answer this question, we represent both input data and target labels as probability measures (i.e., probability densities), denoted as µ_n and ν̂_n, respectively, in the Wasserstein space and solve a distance-based classification problem (i.e.,
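As a rough illustration of distance-based classification in Wasserstein space (a minimal sketch, not the paper's deep diffusion-invariant method; the helper names and the one-dimensional setting are assumptions of this sketch), one can pool each class's samples into an empirical measure and assign a new empirical measure to the nearest class under SciPy's one-dimensional Wasserstein distance:

```python
# Minimal nearest-class-measure classifier under the 1-D Wasserstein distance.
# Inputs and class prototypes are treated as empirical probability measures
# (finite sample sets); this is an illustrative sketch, not the paper's model.
import numpy as np
from scipy.stats import wasserstein_distance

def fit_class_measures(X, y):
    """Pool the samples of each class into an empirical measure (a 1-D sample set)."""
    return {k: X[y == k].ravel() for k in np.unique(y)}

def predict(class_measures, x_samples):
    """Assign the empirical measure of a (possibly perturbed) input to the
    class whose pooled measure is closest in Wasserstein distance."""
    dists = {k: wasserstein_distance(x_samples, nu) for k, nu in class_measures.items()}
    return min(dists, key=dists.get)

# Toy usage: two classes of 1-D samples centred at different means.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.0, 1.0, (50, 8)), rng.normal(3.0, 1.0, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
measures = fit_class_measures(X, y)
print(predict(measures, rng.normal(2.8, 1.0, 8)))  # nearest class measure: 1
```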



Counterfactual Invariance to Spurious Correlations: Why and How to Pass Stress Tests

Neural Information Processing Systems

In machine learning, these have a know-it-when-you-see-it character; e.g., changing the gender of a sentence's subject changes a sentiment predictor's output. To check for spurious correlations, we can 'stress test' models by perturbing irrelevant parts of input data and seeing if model predictions change. In this paper, we study stress testing using the tools of causal inference. We introduce counterfactual invariance as a formalization of the requirement that changing irrelevant parts of the input shouldn't change model predictions.
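As a hedged sketch of such a stress test (the word-swap list, the tolerance, and the predict callable are placeholders, not the paper's procedure), one can perturb a presumably label-irrelevant attribute of each input and flag cases where the model's output moves:

```python
# Perturb an (assumed) irrelevant attribute of the input -- here, gendered
# pronouns in a sentence -- and report inputs whose prediction changes.
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him"}

def swap_gender(sentence: str) -> str:
    """Counterfactual perturbation of a presumably label-irrelevant attribute
    (capitalisation handling is omitted for brevity)."""
    return " ".join(SWAPS.get(w.lower(), w) for w in sentence.split())

def stress_test(predict, sentences, tol=0.0):
    """`predict` is any callable mapping a sentence to a scalar score (assumed).
    Returns the sentences whose score shifts by more than `tol` under the swap."""
    return [s for s in sentences if abs(predict(s) - predict(swap_gender(s))) > tol]
```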





Self-Interpretable Model with Transformation Equivariant Interpretation

Neural Information Processing Systems

With the proliferation of machine learning applications in the real world, the demand for explaining machine learning predictions continues to grow, especially in high-stakes fields. Recent studies have found that interpretation methods can be sensitive and unreliable, where the interpretations can be disturbed by perturbations or transformations of input data. To address this issue, we propose to learn robust interpretations through transformation equivariant regularization in a self-interpretable model.
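One common way to encode such an equivariance constraint is a penalty that compares the interpretation of a transformed input with the same transformation applied to the interpretation of the original input. The sketch below assumes a model(x) -> (prediction, interpretation_map) interface and a weighting factor lambda_eq; both are illustrative assumptions, not the paper's exact regularizer:

```python
# Transformation-equivariance regularizer (illustrative form): the
# interpretation of a transformed input should match the transformed
# interpretation of the original input.
import torch

def equivariance_penalty(model, x, transform):
    """`model(x)` is assumed to return (prediction, interpretation_map);
    `transform` is a spatial transformation (e.g., rotation or flip) that can
    be applied both to the input and to its interpretation map."""
    _, interp_x = model(x)
    _, interp_tx = model(transform(x))
    return torch.mean((interp_tx - transform(interp_x)) ** 2)

# Training objective (hypothetical weighting):
# loss = task_loss + lambda_eq * equivariance_penalty(model, x, transform)
```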