Distraction


This AI Tool Will Tell You to Stop Slacking Off

WIRED

Fomi watches you work, then scolds you when your attention wanders. It's helpful, but there are privacy issues to consider. I've tested a lot of software tools over the years designed to block distractions and keep you focused. None of them work perfectly, mostly because of context. Reddit, for example, is something I should generally avoid during the workday, so I tend to block it, and that's a good decision for me overall.


Shokz OpenFit Pro review: Reducing distractions while keeping your ears open

Engadget

These open-fit earbuds actually make a difference with background noise. Rarely does a set of open-fit earbuds actually impress me. I tend to find them underwhelming because overall sound quality is subpar compared to the more "traditional" in-ear models. The first time I used the Shokz OpenFit Pro ($249.95)


Policy-shaped prediction: avoiding distractions in model-based reinforcement learning

Neural Information Processing Systems

Model-based reinforcement learning (MBRL) is a promising route to sample-efficient policy optimization. However, a known vulnerability of reconstruction-based MBRL consists of scenarios in which detailed aspects of the world are highly predictable, but irrelevant to learning a good policy. Such scenarios can lead the model to exhaust its capacity on meaningless content, at the cost of neglecting important environment dynamics. While existing approaches attempt to solve this problem, we highlight its continuing impact on leading MBRL methods, including DreamerV3 and DreamerPro, with a novel environment where background distractions are intricate, predictable, and useless for planning future actions. To address this challenge we develop a method for focusing the capacity of the world model through a synergy of a pretrained segmentation model, a task-aware reconstruction loss, and adversarial learning. Our method outperforms a variety of other approaches designed to reduce the impact of distractors, and is an advance towards robust model-based reinforcement learning.
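The task-aware reconstruction loss the abstract describes can be sketched as a weighted pixel loss that down-weights regions outside a task-relevant segmentation mask, so world-model capacity is not spent on predictable-but-useless background. This is a minimal illustration under assumed names (`task_aware_recon_loss`, `bg_weight`), not the paper's actual implementation:

```python
import numpy as np

def task_aware_recon_loss(pred, target, task_mask, bg_weight=0.1):
    """Per-pixel squared error, down-weighting pixels outside the
    task-relevant mask so capacity is not spent on distractors."""
    err = (pred - target) ** 2
    weights = np.where(task_mask, 1.0, bg_weight)  # 1.0 inside mask, small outside
    return float(np.mean(weights * err))

rng = np.random.default_rng(0)
target = rng.random((8, 8))
pred = target + 0.5            # uniform reconstruction error everywhere
mask = np.zeros((8, 8), dtype=bool)
mask[:4] = True                # top half is task-relevant
loss = task_aware_recon_loss(pred, target, mask)  # 0.25 * (1.0 + 0.1) / 2 = 0.1375
```

With equal error everywhere, the masked loss is dominated by the task-relevant region; in the paper the mask would come from the pretrained segmentation model rather than being hand-set.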


How to Reclaim Your Mind

The New Yorker

Can You Reclaim Your Mind? To feel mentally alive, you have to do more than defeat distraction. Looking back over the columns I've written in 2025, I can see that a lot of them, broadly construed, have been about reclaiming one's mind. I wrote about living in the present, picturing the future, and exploring one's memories; about reading, learning, and making the most of one's spare time; and about whether artificial intelligence will end up expanding our thinking or limiting it. The shared subject was resistance to the forces, malevolent or inertial, that can render us mentally exhausted and scattered.


Pay Less Attention to Function Words for Free Robustness of Vision-Language Models

Tian, Qiwei, Lin, Chenhao, Zhao, Zhengyu, Shen, Chao

arXiv.org Artificial Intelligence

To address the trade-off between robustness and performance for robust VLMs, we observe that function words can make VLMs vulnerable to cross-modal adversarial attacks, and we propose Function-word De-Attention (FDA) to mitigate their impact. Like a differential amplifier, FDA calculates both the original and the function-word cross-attention within attention heads, and differentially subtracts the latter from the former for more aligned and robust VLMs. Comprehensive experiments cover 2 SOTA baselines under 6 different attacks on 2 downstream tasks, 3 datasets, and 3 models. Overall, our FDA yields an average 18/13/53% ASR drop with only 0.2/0.3/0.6% performance drops on the 3 tested models on retrieval, and a 90% ASR drop with a 0.3% performance gain on visual grounding. We demonstrate the scalability, generalization, and zero-shot performance of FDA experimentally, along with in-depth ablation studies and analysis. Code will be made publicly available.
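The "differentially subtract" idea can be sketched as follows: compute the full cross-attention map, subtract a scaled copy of the attention mass that lands on function-word tokens, and renormalize. The function name, the `alpha` scale, and the boolean `func_mask` are illustrative assumptions; the paper operates inside the attention heads of a VLM rather than on a standalone map:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fda_attention(q, k, func_mask, alpha=1.0):
    """Sketch of Function-word De-Attention: subtract the attention
    falling on function-word keys from the full attention map."""
    scores = q @ k.T / np.sqrt(k.shape[-1])     # (n_q, n_k) similarity
    attn = softmax(scores)                      # original attention
    func_part = np.where(func_mask, attn, 0.0)  # mass on function words only
    de_attn = np.clip(attn - alpha * func_part, 0.0, None)
    return de_attn / de_attn.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
q = rng.standard_normal((2, 4))
k = rng.standard_normal((3, 4))
func_mask = np.array([False, True, False])      # token 1 is a function word
out = fda_attention(q, k, func_mask)            # rows renormalized, token 1 zeroed
```

With `alpha=1.0` the function-word column is removed entirely; smaller values only attenuate it, which is closer in spirit to a differential amplifier.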





Towards Robust Bisimulation Metric Learning

Neural Information Processing Systems

Learned representations in deep reinforcement learning (DRL) must extract task-relevant information from complex observations, balancing robustness to distraction with informativeness to the policy.


Studying the Effects of Robot Intervention on School Shooters in Virtual Reality

McClurg, Christopher A, Wagner, Alan R

arXiv.org Artificial Intelligence

We advance the understanding of robotic intervention in high-risk scenarios by examining a robot's potential to distract and impede a school shooter. To evaluate this concept, we conducted a virtual reality study with 150 university participants role-playing as a school shooter. Within the simulation, an autonomous robot predicted the shooter's movements and positioned itself strategically to interfere and distract. We manipulated the strategy the robot used to approach the shooter, either moving directly in front of the shooter (aggressive) or maintaining distance (passive), as well as the distraction method, ranging from no additional cues (low), to siren and lights (medium), to siren, lights, and smoke to impair visibility (high). An aggressive, high-distraction robot reduced the number of victims by 46.6% relative to a no-robot control. This outcome underscores both the potential of robotic intervention to enhance safety and the pressing ethical questions surrounding its use in school environments.


State Your Intention to Steer Your Attention: An AI Assistant for Intentional Digital Living

Choi, Juheon, Lee, Juyong, Kim, Jian, Kim, Chanyoung, Min, Taywon, Knox, W. Bradley, Lee, Min Kyung, Lee, Kimin

arXiv.org Artificial Intelligence

When working on digital devices, people often face distractions that can lead to a decline in productivity and efficiency, as well as negative psychological and emotional impacts. To address this challenge, we introduce a novel Artificial Intelligence (AI) assistant that elicits a user's intention, assesses whether ongoing activities are in line with that intention, and provides gentle nudges when deviations occur. The system leverages a large language model to analyze screenshots, application titles, and URLs, issuing notifications when behavior diverges from the stated goal. Its detection accuracy is refined through initial clarification dialogues and continuous user feedback. In a three-week, within-subjects field deployment with 22 participants, we compared our assistant to both a rule-based intent reminder system and a passive baseline that only logged activity. Results indicate that our AI assistant effectively supports users in maintaining focus and aligning their digital behavior with their intentions. Our source code is publicly available at https://intentassistant.github.io
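The monitor-compare-nudge loop the abstract describes can be sketched in a few lines. Here the LLM judgment is replaced by a keyword check, and `Activity`, `matches_intention`, and `nudge_if_deviating` are hypothetical names, not the project's actual API:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    """A snapshot of what the user is doing (app title and URL)."""
    app_title: str
    url: str

def matches_intention(intention: str, activity: Activity) -> bool:
    """Stand-in for the paper's LLM call: does the current activity
    look related to the stated intention? Here, a naive keyword check."""
    text = (activity.app_title + " " + activity.url).lower()
    return any(word in text for word in intention.lower().split())

def nudge_if_deviating(intention: str, activity: Activity):
    """Return a gentle reminder when the activity diverges from the goal."""
    if not matches_intention(intention, activity):
        return f"Reminder: you said you wanted to '{intention}'."
    return None

on_task = nudge_if_deviating("write paper draft",
                             Activity("paper draft - Overleaf", "overleaf.com"))
off_task = nudge_if_deviating("write paper draft",
                              Activity("Reddit - r/all", "reddit.com"))
```

In the deployed system the check would also use screenshots, and the clarification dialogues and user feedback described in the abstract would refine it over time.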