
Review for NeurIPS paper: Stationary Activations for Uncertainty Calibration in Deep Learning

Neural Information Processing Systems

The overall motivation is not clear. It is true that Matern kernels are good at capturing sharp transitions, as shown in Fig. 1, but there are many other methods that achieve similar, if not better, results: for instance, one can learn kernels [1,2], use deep kernels [2], use spectral mixture kernels [3], use "neural-network kernels" [4], etc. Comparisons with [1]-[5] would provide further insight. Please also report MSE and AUC in addition to accuracy.
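
To make the smoothness point concrete, below is a minimal sketch, not taken from the paper under review, of the standard half-integer Matern covariances alongside the squared-exponential (RBF) kernel; all function names and parameter choices here are illustrative only. The smoothness parameter nu controls how rough the sampled functions can be, which is what lets Matern models capture sharp transitions.

```python
# A minimal sketch (illustrative, not from the paper) of Matern covariances.
# Small nu allows sharp transitions; the RBF kernel forces very smooth samples.
import numpy as np

def matern_kernel(r, lengthscale=1.0, nu=0.5):
    """Matern covariance k(r) for the common half-integer cases."""
    s = np.abs(r) / lengthscale
    if nu == 0.5:  # exponential kernel: continuous but non-differentiable samples
        return np.exp(-s)
    if nu == 1.5:  # once-differentiable samples
        return (1.0 + np.sqrt(3.0) * s) * np.exp(-np.sqrt(3.0) * s)
    if nu == 2.5:  # twice-differentiable samples
        return (1.0 + np.sqrt(5.0) * s + 5.0 * s**2 / 3.0) * np.exp(-np.sqrt(5.0) * s)
    raise ValueError("this sketch only covers nu in {0.5, 1.5, 2.5}")

def rbf_kernel(r, lengthscale=1.0):
    """Squared-exponential covariance: infinitely smooth samples."""
    return np.exp(-0.5 * (r / lengthscale) ** 2)

r = np.linspace(0.0, 2.0, 5)
print(matern_kernel(r, nu=0.5))  # decays linearly near r=0 -> rough functions
print(rbf_kernel(r))             # flat near r=0 -> very smooth functions
```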


Review for NeurIPS paper: One-bit Supervision for Image Classification

Neural Information Processing Systems

Additional Feedback: I consider this work a new method in the context of semi-supervised learning and active learning; indeed, these are the two topics the authors review as related work. The method is essentially yet another way to rearrange labeled and unlabeled samples in order to identify "active" samples and improve learning accuracy. Thus it is not an eye-opening, truly novel approach; I would argue this method is incrementally novel at best.


Can Large Language Models (or Humans) Disentangle Text?

Audinet de Pieuchon, Nicolas, Daoud, Adel, Jerzak, Connor Thomas, Johansson, Moa, Johansson, Richard

arXiv.org Artificial Intelligence

We investigate the potential of large language models (LLMs) to disentangle text variables, that is, to remove the textual traces of an undesired ("forbidden") variable, a task sometimes known as text distillation and closely related to the fairness-in-AI and causal-inference literatures. We employ a range of LLM approaches in an attempt to disentangle text by identifying and removing information about a target variable while preserving other relevant signals. We show that, in the strong test of removing sentiment, the statistical association between the processed text and sentiment remains detectable to machine learning classifiers after LLM disentanglement. Furthermore, we find that human annotators also struggle to disentangle sentiment while preserving other semantic content. This suggests there may be limited separability between concept variables in some text contexts, highlighting the limitations of methods that rely on text-level transformations and raising questions about the robustness of disentanglement methods that achieve statistical independence in representation space.
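
The residual-association test the abstract describes can be summarized in a few lines. Below is a hedged sketch, not the authors' code: it trains a simple bag-of-words classifier on LLM-processed texts and checks whether the supposedly removed sentiment is still predictable; the texts and labels are toy stand-ins for the paper's actual datasets.

```python
# A hedged sketch (not the authors' code) of the residual-association test:
# can a simple classifier still recover sentiment from LLM-"disentangled" text?
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy stand-ins; replace with the LLM-processed texts and their original labels.
processed_texts = ["the plot unfolds at a steady pace"] * 10 + \
                  ["the plot drags on far too long"] * 10
sentiment_labels = [1] * 10 + [0] * 10

X = TfidfVectorizer().fit_transform(processed_texts)
clf = LogisticRegression(max_iter=1000)

# Cross-validated accuracy well above chance means a statistical association
# between the processed text and sentiment survived the disentanglement step.
scores = cross_val_score(clf, X, sentiment_labels, cv=5, scoring="accuracy")
print(f"mean residual-sentiment accuracy: {scores.mean():.3f}")
```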


Learning the Difference that Makes a Difference with Counterfactually-Augmented Data

Kaushik, Divyansh, Hovy, Eduard, Lipton, Zachary C.

arXiv.org Artificial Intelligence

Despite alarm over the reliance of machine learning systems on so-called spurious patterns in training data, the term lacks coherent meaning in standard statistical frameworks. However, the language of causality offers clarity: spurious associations are those due to a common cause (confounding), as opposed to direct or indirect effects. In this paper, we focus on NLP, introducing methods and resources for training models insensitive to spurious patterns. Given documents and their initial labels, we task humans with revising each document to accord with a counterfactual target label, asking that the revised documents be internally coherent while avoiding any gratuitous changes. Interestingly, on sentiment analysis and natural language inference tasks, classifiers trained on original data fail on their counterfactually-revised counterparts and vice versa. Classifiers trained on combined datasets perform remarkably well, just shy of those specialized to either domain. While classifiers trained on either original or manipulated data alone are sensitive to spurious features (e.g., mentions of genre), models trained on the combined data are insensitive to this signal. We will publicly release both datasets.
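
To make the data setup concrete, here is a minimal sketch, not the authors' released code, of the comparison the abstract describes: each counterfactually revised example flips only the label-relevant phrases, so incidental features stop predicting the label in the combined training set. All texts and labels below are toy placeholders.

```python
# A minimal sketch (not the authors' code) of training on combined data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

original = [("a moving, beautifully shot film", 1),
            ("a dull, clumsily edited film", 0)] * 10
revised  = [("a dull, beautifully shot film", 0),   # label flipped, style kept
            ("a moving, clumsily edited film", 1)] * 10

texts, labels = zip(*(original + revised))          # combined training set
X = TfidfVectorizer().fit_transform(texts)

# On the combined data, only "moving"/"dull" covary with the label, so the
# classifier cannot lean on the spurious "beautifully shot" signal.
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.score(X, labels))
```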


Best Video Games (2018): God of War, Spider-Man, and More

WIRED

Cry it out from the rooftops: we survived 2018. And in this long, complicated year, a few games stuck out as the best, the most interesting, and the most surprising of the year. Whether you're catching up over the holidays or just looking for fuel to argue with your friends, here are our picks for the best videogames released in 2018. Games are a vast and varied field, friends; so are opinions. Monster Hunter has, for a certain variety of player, been a big deal for years.