Motivation




Neuroscientists Decipher Procrastination: A Brain Mechanism Explains Why People Leave Certain Tasks for Later

WIRED

New research has identified a neural circuit that may explain procrastination, and scientists were able to disrupt this connection using a drug. According to the study, the brain avoids tasks associated with unpleasant experiences even when they promise a clear reward, which may explain why you postpone household chores and spend your time browsing social media instead.


Control What You Can: Intrinsically Motivated Task-Planning Agent

Neural Information Processing Systems

We present a novel intrinsically motivated agent that learns how to control the environment in a sample-efficient manner, that is, with as few environment interactions as possible, by optimizing learning progress. It learns what can be controlled, how to allocate time and attention, and the relations between objects, using surprise-based motivation. The effectiveness of our method is demonstrated in synthetic and robotic manipulation environments, yielding considerably improved performance and smaller sample complexity compared to an intrinsically motivated non-hierarchical baseline and a state-of-the-art hierarchical baseline. In a nutshell, our work combines several task-level planning agent structures (backtracking search on a task graph, probabilistic road-maps, allocation of search effort) with intrinsic motivation to achieve learning from scratch.
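The surprise-based motivation the abstract mentions is commonly implemented as the prediction error of a learned forward model: the agent is intrinsically rewarded for outcomes its model fails to predict. A minimal sketch of that generic idea (this is an illustration of the standard technique, not the paper's actual agent; the toy scalar dynamics and the `SurpriseReward` class are assumptions):

```python
class SurpriseReward:
    """Intrinsic reward proportional to forward-model prediction error ("surprise")."""

    def __init__(self, lr=0.5):
        self.lr = lr
        # Toy forward model: maps (state, action) to a predicted scalar next state.
        self.model = {}

    def intrinsic_reward(self, state, action, next_state):
        predicted = self.model.get((state, action), 0.0)
        surprise = abs(next_state - predicted)  # prediction error is the reward
        # Move the prediction toward the observed outcome, so repeated
        # transitions become predictable and the surprise reward decays.
        self.model[(state, action)] = predicted + self.lr * (next_state - predicted)
        return surprise
```

Because the model improves with every visit, the reward for a given transition shrinks over time, nudging the agent toward parts of the environment it cannot yet predict or control.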


Future You: Designing and Evaluating Multimodal AI-generated Digital Twins for Strengthening Future Self-Continuity

Albrecht, Constanze, Archiwaranguprok, Chayapatr, Poonsiriwong, Rachel, Chen, Awu, Yin, Peggy, Lertsutthiwong, Monchai, Winson, Kavin, Hershfield, Hal, Maes, Pattie, Pataranutaporn, Pat

arXiv.org Artificial Intelligence

What if users could meet their future selves today? AI-generated future selves simulate meaningful encounters with a digital twin decades in the future. As AI systems advance, combining cloned voices, age-progressed facial rendering, and autobiographical narratives, a central question emerges: Does the modality of these future selves alter their psychological and affective impact? How might a text-based chatbot, a voice-only system, or a photorealistic avatar shape present-day decisions and our feeling of connection to the future? We report a randomized controlled study (N=92) evaluating three modalities of AI-generated future selves (text, voice, avatar) against a neutral control condition. We also report a systematic model evaluation between Claude 4 and three other Large Language Models (LLMs), assessing Claude 4 across psychological and interaction dimensions and establishing conversational AI quality as a critical determinant of intervention effectiveness. All personalized modalities strengthened Future Self-Continuity (FSC), emotional well-being, and motivation compared to control, with avatar producing the largest vividness gains, yet with no significant differences between formats. Interaction quality metrics, particularly persuasiveness, realism, and user engagement, emerged as robust predictors of psychological and affective outcomes, indicating that how compelling the interaction feels matters more than the form it takes. Content analysis found thematic patterns: text emphasized career planning, while voice and avatar facilitated personal reflection. Claude 4 outperformed ChatGPT 3.5, Llama 4, and Qwen 3 in enhancing psychological, affective, and FSC outcomes.


Let Them Down Easy! Contextual Effects of LLM Guardrails on User Perceptions and Preferences

Zheng, Mingqian, Hu, Wenjia, Zhao, Patrick, Eslami, Motahhare, Hwang, Jena D., Brahman, Faeze, Rose, Carolyn, Sap, Maarten

arXiv.org Artificial Intelligence

Current LLMs are trained to refuse potentially harmful input queries regardless of whether users actually had harmful intents, causing a tradeoff between safety and user experience. Through a study of 480 participants evaluating 3,840 query-response pairs, we examine how different refusal strategies affect user perceptions across varying motivations. Our findings reveal that response strategy largely shapes user experience, while actual user motivation has negligible impact. Partial compliance -- providing general information without actionable details -- emerges as the optimal strategy, reducing negative user perceptions by over 50% relative to flat-out refusals. Complementing this, we analyze response patterns of 9 state-of-the-art LLMs and evaluate how 6 reward models score different refusal strategies, demonstrating that models rarely deploy partial compliance naturally and that reward models currently undervalue it. This work demonstrates that effective guardrails require focusing on crafting thoughtful refusals rather than detecting intent, offering a path toward AI safety mechanisms that ensure both safety and sustained user engagement.


A Taxonomy of Pix Fraud in Brazil: Attack Methodologies, AI-Driven Amplification, and Defensive Strategies

Pizzolato, Glener Lanes, Lopes, Brenda Medeiros, Schepke, Claudio, Kreutz, Diego

arXiv.org Artificial Intelligence

This work presents a review of attack methodologies targeting Pix, the instant payment system launched by the Central Bank of Brazil in 2020. The study aims to identify and classify the main types of fraud affecting users and financial institutions, highlighting the evolution and increasing sophistication of these techniques. The methodology combines a structured literature review with exploratory interviews conducted with professionals from the banking sector. The results show that fraud schemes have evolved from purely social engineering approaches to hybrid strategies that integrate human manipulation with technical exploitation. The study concludes that security measures must advance at the same pace as the growing complexity of attack methodologies, with particular emphasis on adaptive defenses and continuous user awareness.


We would very much like further exchanges to improve our work, but the following is our best effort within the current limits

Neural Information Processing Systems

We sincerely appreciate the reviewers for their careful reading and constructive questions and suggestions. First, we address questions that appeared at least twice. We write P1, P2 for paragraph references, and Rx for reviewers. That is, they only consider the case when the representations are precisely equal. To the best of our knowledge, our work is the first to incorporate continuous similarity into the design of GNNs.