
I'm a neurologist... here are three simple tricks to help you kick any bad habit

Daily Mail - Science & tech

Some bad habits are small. But over time, they add up, and suddenly you're wondering how you ended up here. Now, a neurologist says three simple tricks can help break the cycles that quietly take over our lives.
Dr Arif Khan, a pediatric neurologist, outlined 'cue shift,' the 'one-step rule' and 'reward rewrite' as practical tools to stop negative patterns in their tracks.


What Can We Learn from Unlearnable Datasets?

Neural Information Processing Systems

In an era of widespread web scraping, unlearnable dataset methods have the potential to protect data privacy by preventing deep neural networks from generalizing. But in addition to a number of practical limitations that make their use unlikely, we present several findings that call into question their ability to safeguard data. First, it is widely believed that neural networks trained on unlearnable datasets only learn shortcuts, simpler rules that are not useful for generalization. In contrast, we find that networks actually can learn useful features that can be reweighted for high test performance, suggesting that image protection is not assured. Unlearnable datasets are also believed to induce learning shortcuts through linear separability of added perturbations. We provide a counterexample, demonstrating that linear separability of perturbations is not a necessary condition. To emphasize why linearly separable perturbations should not be relied upon, we propose an orthogonal projection attack which allows learning from unlearnable datasets published in ICML 2021 and ICLR 2023. Our proposed attack is significantly less complex than recently proposed techniques.
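The core idea of an orthogonal projection attack can be sketched in a few lines: fit a linear classifier to recover the linearly separable perturbation direction, then project every example onto its orthogonal complement before training. This is a minimal numpy illustration on synthetic data with a single class-aligned poison direction, an assumption made for this sketch; the published perturbations and the paper's exact procedure are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 16
y = rng.integers(0, 2, size=n)

# Synthetic "unlearnable" data: clean features plus a strong,
# class-aligned, linearly separable perturbation along one axis.
clean = rng.normal(size=(n, d))
poison_dir = np.zeros(d)
poison_dir[0] = 1.0
X = clean + 5.0 * np.outer(2 * y - 1, poison_dir)

# Step 1: recover the perturbation direction with a linear
# least-squares fit of the (+/-1) labels on the poisoned inputs.
w, *_ = np.linalg.lstsq(X, 2.0 * y - 1.0, rcond=None)
w /= np.linalg.norm(w)

# Step 2: project every example onto the orthogonal complement of w,
# removing the linearly separable shortcut before normal training.
X_clean = X - np.outer(X @ w, w)
```

After the projection, the component of the data along the learned direction is exactly zero, so the linearly separable shortcut is no longer available to a downstream model.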


Don't blame Dataset Shift! Shortcut Learning due to Gradients and Cross Entropy

Neural Information Processing Systems

Common explanations for shortcut learning assume that the shortcut improves prediction only under the training distribution. Thus, models trained in the typical way by minimizing log-loss using gradient descent, which we call default-ERM, should utilize the shortcut. However, even when the stable feature determines the label in the training distribution and the shortcut does not provide any additional information, like in perception tasks, default-ERM exhibits shortcut learning. Why are such solutions preferred when the loss can be driven to zero when using the stable feature alone? By studying a linear perception task, we show that default-ERM's preference for maximizing the margin, even without overparameterization, leads to models that depend more on the shortcut than the stable feature. This insight suggests that default-ERM's implicit inductive bias towards max-margin may be unsuitable for perception tasks. Instead, we consider inductive biases toward uniform margins. We show that uniform margins guarantee sole dependence on the perfect stable feature in the linear perception task and suggest alternative loss functions, termed margin control (MARG-CTRL), that encourage uniform-margin solutions. MARG-CTRL techniques mitigate shortcut learning on a variety of vision and language tasks, showing that changing inductive biases can remove the need for complicated shortcut-mitigating methods in perception tasks.
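One simple way to encode an inductive bias toward uniform margins is to add a penalty on the variance of the per-example margins to the usual log-loss. The sketch below is an illustrative surrogate in the spirit of MARG-CTRL, not the paper's exact loss functions, and the toy data is chosen so that two weight vectors achieve the same mean margin but different margin spreads.

```python
import numpy as np

def uniform_margin_loss(w, X, y, lam=1.0):
    """Log-loss plus a penalty on the variance of margins y_i * (w . x_i).
    Illustrative uniform-margin surrogate; the paper's MARG-CTRL
    losses take different functional forms."""
    margins = y * (X @ w)
    log_loss = np.mean(np.log1p(np.exp(-margins)))
    return log_loss + lam * np.var(margins)

# Toy comparison: same mean margin, different spread.
X = np.eye(2)
y = np.ones(2)
w_uniform = np.array([2.0, 2.0])  # margins [2, 2]
w_skewed = np.array([1.0, 3.0])   # margins [1, 3]
```

The skewed solution pays the variance penalty (and, by convexity of the log-loss in the margin, a higher log-loss as well), so the uniform-margin solution is preferred, which is the qualitative behavior the abstract describes.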


Learning Concept Credible Models for Mitigating Shortcuts

Neural Information Processing Systems

During training, models can exploit spurious correlations as shortcuts, resulting in poor generalization performance when shortcuts do not persist. In this work, assuming access to a representation based on domain knowledge (i.e., known concepts) that is invariant to shortcuts, we aim to learn robust and accurate models from biased training data. In contrast to previous work, we do not rely solely on known concepts, but allow the model to also learn unknown concepts. We propose two approaches for mitigating shortcuts that incorporate domain knowledge, while accounting for potentially important yet unknown concepts. The first approach is two-staged. After fitting a model using known concepts, it accounts for the residual using unknown concepts. While flexible, we show that this approach is vulnerable when shortcuts are correlated with the unknown concepts. This limitation is addressed by our second approach that extends a recently proposed regularization penalty. Applied to two real-world datasets, we demonstrate that both approaches can successfully mitigate shortcut learning.
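The two-staged approach above can be sketched in a linear setting: first fit the target from the known concepts, then fit the residual from unknown-concept features. This is a simplified least-squares illustration under the assumption that the known and unknown concept features are orthogonal; as the abstract notes, the approach is vulnerable when shortcuts correlate with the unknown concepts, a failure mode this toy setup deliberately avoids.

```python
import numpy as np

def two_stage_fit(C_known, Z_unknown, y):
    """Stage 1: fit the target from known concepts (least squares).
    Stage 2: fit the residual from unknown-concept features.
    A linear sketch of the two-staged approach, not the paper's model."""
    w_known, *_ = np.linalg.lstsq(C_known, y, rcond=None)
    residual = y - C_known @ w_known
    w_unknown, *_ = np.linalg.lstsq(Z_unknown, residual, rcond=None)
    return w_known, w_unknown

# Toy data where known and unknown concepts are orthogonal columns.
I = np.eye(6)
C, Z = I[:, :3], I[:, 3:]
a = np.array([1.0, -2.0, 0.5])
b = np.array([0.3, 0.0, 1.5])
y = C @ a + Z @ b
w1, w2 = two_stage_fit(C, Z, y)
```

With orthogonal concept blocks, stage 1 recovers the known-concept coefficients exactly and stage 2 recovers the unknown-concept coefficients from the residual.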


Roadblocks for Temporarily Disabling Shortcuts and Learning New Knowledge

Neural Information Processing Systems

Deep learning models have been found to rely on shortcuts, i.e., decision rules that perform well on standard benchmarks but fail when transferred to more challenging testing conditions. Such reliance may hinder deep learning models from learning other task-related features and seriously affect their performance and robustness. Although recent studies have shown some characteristics of shortcuts, there are few investigations on how to help deep learning models solve shortcut problems. This paper proposes a framework to address this issue by setting up roadblocks on shortcuts. Specifically, roadblocks are placed by urging the model to complete a gently modified task, ensuring that the learned knowledge, including shortcuts, is insufficient to complete the task. Therefore, the model trained on the modified task will no longer over-rely on shortcuts. Extensive experiments demonstrate that the proposed framework significantly improves the training of networks on both synthetic and real-world datasets in terms of both classification accuracy and feature diversity. Moreover, the visualization results show that the mechanism behind our proposed method is consistent with our expectations. In summary, our approach can effectively disable the shortcuts and thus learn more robust features.
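The abstract does not spell out how the task is modified, so the sketch below uses a generic stand-in for the roadblock idea: permute known shortcut features across examples so that the shortcut alone can no longer complete the task. This is an illustrative assumption, not the paper's actual mechanism.

```python
import numpy as np

def shuffle_shortcut(X, shortcut_cols, seed=0):
    """Generic roadblock stand-in: permute the shortcut feature columns
    across examples, breaking their correlation with the labels so the
    shortcut alone is insufficient to complete the task. Illustrative
    only; the paper modifies the task itself rather than the features."""
    rng = np.random.default_rng(seed)
    X_mod = X.copy()
    perm = rng.permutation(len(X))
    X_mod[:, shortcut_cols] = X[perm][:, shortcut_cols]
    return X_mod
```

The shuffled columns keep their marginal distribution but lose their alignment with the labels, while all other features are untouched.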


Augmented Shortcuts for Vision Transformers

Neural Information Processing Systems

Transformer models have achieved great progress on computer vision tasks recently. The rapid development of vision transformers is mainly attributable to their high representation ability for extracting informative features from input images. However, the mainstream transformer models are designed with deep architectures, and the feature diversity is continuously reduced as the depth increases, i.e., feature collapse. In this paper, we theoretically analyze the feature collapse phenomenon and study the relationship between shortcuts and feature diversity in these transformer models. Then, we present an augmented shortcut scheme, which inserts additional paths with learnable parameters in parallel on the original shortcuts. To save computational costs, we further explore an efficient approach that uses the block-circulant projection to implement augmented shortcuts. Extensive experiments conducted on benchmark datasets demonstrate the effectiveness of the proposed method, which brings about a 1% accuracy increase for state-of-the-art vision transformers without obviously increasing their parameters and FLOPs.
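The augmented shortcut scheme can be sketched as y = x + block(x) + sum_i T_i(x), where each extra path T_i is a learnable projection in parallel with the identity shortcut, and a circulant parameterization keeps each path at n parameters instead of n^2. This is a minimal numpy sketch of the idea, not the paper's trained modules.

```python
import numpy as np

def circulant(c):
    """Build an n x n circulant matrix from its n-dim generating vector,
    so each augmented path costs n parameters instead of n^2."""
    n = len(c)
    return np.stack([np.roll(c, i) for i in range(n)])

def augmented_block(x, block_fn, circ_params):
    """y = x + block_fn(x) + sum_i T_i(x), with each T_i a circulant
    projection in parallel with the original shortcut. A sketch of the
    augmented-shortcut scheme; real models learn these parameters."""
    out = x + block_fn(x)
    for c in circ_params:
        out = out + x @ circulant(c).T
    return out
```

With the generating vector set to the first standard basis vector, the circulant matrix is the identity, so an augmented path initialized this way starts out as a second identity shortcut.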


Causally motivated multi-shortcut identification and removal

Neural Information Processing Systems

For predictive models to provide reliable guidance in decision-making processes, they are often required to be accurate and robust to distribution shifts. Shortcut learning--where a model relies on spurious correlations or shortcuts to predict the target label--undermines the robustness property, leading to models with poor out-of-distribution accuracy despite good in-distribution performance. Existing work on shortcut learning either assumes that the set of possible shortcuts is known a priori or is discoverable using interpretability methods such as saliency maps, which might not always be true. Instead, we propose a two-step approach to (1) efficiently identify relevant shortcuts, and (2) leverage the identified shortcuts to build models that are robust to distribution shifts. Our approach relies on having access to a (possibly) high-dimensional set of auxiliary labels at training time, some of which correspond to possible shortcuts. We show both theoretically and empirically that our approach is able to identify a sufficient set of shortcuts leading to more efficient predictors in finite samples.
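The identification step can be illustrated with a naive correlation screen over the auxiliary labels: flag any auxiliary label that is strongly correlated with the target in the training distribution as a candidate shortcut. The threshold and the screen itself are assumptions made for this sketch; the paper's identification procedure is causally motivated, not a plain correlation test.

```python
import numpy as np

def flag_candidate_shortcuts(y, aux, threshold=0.3):
    """Flag auxiliary labels strongly correlated with the target as
    candidate shortcuts. A naive correlation screen for illustration;
    the paper's identification step is causally motivated."""
    candidates = []
    for j in range(aux.shape[1]):
        r = np.corrcoef(y, aux[:, j])[0, 1]
        if abs(r) > threshold:
            candidates.append(j)
    return candidates
```

On toy data where one auxiliary label copies the target and another is uncorrelated with it, only the first is flagged; the flagged set would then feed the second, removal step.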