
A Proof of Theorem 2

Neural Information Processing Systems

Each batch contains around 32K tokens. All experiments are run on either 4 NVIDIA A100 or 4 NVIDIA V100 GPUs. We analyze the effect of parallel-data size in Figure 4, where our approach consistently outperforms. We also demonstrate several example generations from the different models in Table 3 (examples of generated dialogue responses); one context reads: "We can make shipment within one month from receipt of order."


Teenage boys using 'personalised' AI for therapy and romance, survey finds

The Guardian

New research suggests teenage boys in particular are using AI bots as surrogate therapists. Male Allies UK worries that the rise in chatbot 'girlfriends' will leave boys unable to socialise and respect boundaries. The "hyper-personalised" nature of AI bots is drawing in teenage boys, who now use them for therapy, companionship and relationships, according to the research. A survey of boys in secondary schools by Male Allies UK found that just over a third said they were considering the idea of an AI friend, with growing concern about the rise of AI therapists and girlfriends. The research comes as character.ai



I'm a 26-Year-Old Man. I Can Tell You What's Happening in My Sex Life--and Gen Z's.

Slate

Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily. When it comes to sex in 2025--who's having it, who isn't, and how--perceptions are all over the place. Is Gen Z sliding back in time? Are middle-aged women finally having good sex, or none at all? And what exactly is going on with seniors in retirement homes? In the series Pillow Talk, we interview one person in a specific time and place in their lives about what sex looks like for them and their peers, in every enlightening (and excruciating) detail. Get in touch if you have an idea for a subject--or if you have a story to tell.


Confessions of a Recovering AI Porn Addict

WIRED

Kyle's interest in AI porn began last summer as he circled rock bottom. From the outside, everything seemed fine. He was in a committed relationship with his longtime girlfriend. He enjoyed the perks of his job working for a sports betting company. Still, all he could think about was fueling his porn addiction in new ways--even at the cost of feeling mentally drained and tired.


Perspectives on How Sociology Can Advance Theorizing about Human-Chatbot Interaction and Developing Chatbots for Social Good

Campos-Castillo, Celeste, Kang, Xuan, Laestadius, Linnea I.

arXiv.org Artificial Intelligence

Recently, research into chatbots (also known as conversational agents, AI agents, voice assistants), which are computer applications using artificial intelligence to mimic human-like conversation, has grown sharply. Despite this growth, sociology lags other disciplines (including computer science, medicine, psychology, and communication) in publishing about chatbots. We suggest sociology can advance understanding of human-chatbot interaction and offer four sociological theories to enhance extant work in this field. The first two theories (resource substitution theory, power-dependence theory) add new insights to existing models of the drivers of chatbot use, which overlook sociological concerns about how social structure (e.g., systemic discrimination, the uneven distribution of resources within networks) inclines individuals to use chatbots, including problematic levels of emotional dependency on chatbots. The second two theories (affect control theory, fundamental cause of disease theory) help inform the development of chatbot-driven interventions that minimize safety risks and enhance equity by leveraging sociological insights into how chatbot outputs could attend to cultural contexts (e.g., affective norms) to promote wellbeing and enhance communities (e.g., opportunities for civic participation). We discuss the value of applying sociological theories for advancing theorizing about human-chatbot interaction and developing chatbots for social good.


Evaluating Apple Intelligence's Writing Tools for Privacy Against Large Language Model-Based Inference Attacks: Insights from Early Datasets

Soumik, Mohd. Farhan Israk, Hasan, Syed Mhamudul, Shahid, Abdur R.

arXiv.org Artificial Intelligence

The misuse of Large Language Models (LLMs) to infer emotions from text for malicious purposes, known as emotion inference attacks, poses a significant threat to user privacy. In this paper, we investigate the potential of Apple Intelligence's writing tools, integrated across iPhone, iPad, and MacBook, to mitigate these risks through text modifications such as rewriting and tone adjustment. By developing early novel datasets specifically for this purpose, we empirically assess how different text modifications influence LLM-based detection, and our results suggest strong potential for Apple Intelligence's writing tools as privacy-preserving mechanisms. Our findings lay the groundwork for future adaptive rewriting systems capable of dynamically neutralizing sensitive emotional content to enhance user privacy. To the best of our knowledge, this research provides the first empirical analysis of Apple Intelligence's text-modification tools within a privacy-preservation context, with the broader goal of developing on-device, user-centric privacy-preserving mechanisms to protect against LLM-based advanced inference attacks on deployed systems.
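To make the evaluation protocol in this abstract concrete, here is a minimal sketch of how one might measure an emotion inference attack before and after rewriting. The attacker, the rewriter, and the keyword lexicon below are toy stand-ins invented for illustration; they are not Apple Intelligence's actual tools or the paper's datasets.

```python
# Toy evaluation of text rewriting as a defense against emotion inference.
# All names and logic here are illustrative assumptions, not the paper's code.

EMOTION_LEXICON = {"furious": "anger", "thrilled": "joy", "devastated": "sadness"}

def infer_emotion(text: str) -> str:
    """Toy attacker: keyword lookup standing in for an LLM inference attack."""
    for word, emotion in EMOTION_LEXICON.items():
        if word in text.lower():
            return emotion
    return "neutral"

def rewrite_neutral(text: str) -> str:
    """Toy rewriter standing in for a tone-adjustment writing tool."""
    out = text
    for word in EMOTION_LEXICON:
        out = out.replace(word, "somewhat affected")
    return out

def attack_accuracy(samples) -> float:
    """Fraction of (text, true_emotion) pairs the attacker recovers correctly."""
    return sum(infer_emotion(t) == y for t, y in samples) / len(samples)
```

Comparing `attack_accuracy` on raw samples against the same samples passed through `rewrite_neutral` gives the before/after detection gap that the paper's empirical assessment would measure, only with real LLMs and real rewriting tools in place of these stubs.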


Toward Evaluative Thinking: Meta Policy Optimization with Evolving Reward Models

Kim, Zae Myung, Park, Chanwoo, Raheja, Vipul, Kim, Suin, Kang, Dongyeop

arXiv.org Artificial Intelligence

Reward-based alignment methods for large language models (LLMs) face two key limitations: vulnerability to reward hacking, where models exploit flaws in the reward signal; and reliance on brittle, labor-intensive prompt engineering when LLMs are used as reward models. We introduce Meta Policy Optimization (MPO), a framework that addresses these challenges by integrating a meta-reward model that dynamically refines the reward model's prompt throughout training. In MPO, the meta-reward model monitors the evolving training context and continuously adjusts the reward model's prompt to maintain high alignment, providing an adaptive reward signal that resists exploitation by the policy. This meta-learning approach promotes a more stable policy optimization, and greatly reduces the need for manual reward prompt design. It yields performance on par with or better than models guided by extensively hand-crafted reward prompts. Furthermore, we show that MPO maintains its effectiveness across diverse tasks, from essay writing to mathematical reasoning, without requiring specialized reward designs. Beyond standard RLAIF, MPO's meta-learning formulation is readily extensible to higher-level alignment frameworks. Overall, this method addresses theoretical and practical challenges in reward-based RL alignment for LLMs, paving the way for more robust and adaptable alignment strategies. The code and data can be accessed at: https://github.com/minnesotanlp/mpo
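Purely as an illustration of the control flow the abstract describes, the MPO loop might be sketched as follows. The `reward_model`, `meta_reward_model`, and rubric strings here are toy stand-ins for the paper's LLM-based components; every name is an assumption for illustration, not the authors' implementation.

```python
# Sketch of the Meta Policy Optimization (MPO) loop: a meta-reward model
# periodically inspects recent training transcripts and refines the reward
# model's rubric prompt. All components are stubbed so the flow runs alone.

def reward_model(rubric: str, response: str) -> float:
    """Stub reward model: scores a response against the current rubric prompt."""
    words = rubric.split()
    return sum(w in response for w in words) / max(len(words), 1)

def meta_reward_model(rubric: str, recent: list) -> str:
    """Stub meta-reward model: tightens the rubric when it spots exploitation."""
    # Toy refinement: if responses lean on one keyword, demand novelty too.
    if any("keyword" in t for t in recent) and "novel" not in rubric:
        return rubric + " novel"
    return rubric

def mpo_training_loop(policy_outputs, rubric="score keyword coverage", meta_every=2):
    """Score each policy output; every `meta_every` steps, let the
    meta-reward model rewrite the rubric based on recent transcripts."""
    history, rewards = [], []
    for step, response in enumerate(policy_outputs):
        rewards.append(reward_model(rubric, response))
        history.append(response)
        if (step + 1) % meta_every == 0:  # periodic meta-level update
            rubric = meta_reward_model(rubric, history[-meta_every:])
    return rubric, rewards
```

The point of the sketch is the two-level structure: the inner loop scores policy outputs under a mutable rubric, while the outer (meta) step adapts that rubric as the policy evolves, which is what makes the reward signal harder to exploit than a fixed hand-crafted prompt.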


An Autistic Teenager Fell Hard for a Chatbot

The Atlantic - Technology

My godson, Michael, is a playful, energetic 15-year-old, with a deep love of Star Wars, a wry smile, and an IQ in the low 70s. His learning disabilities and autism have made his journey a hard one. His parents, like so many others, sometimes rely on screens to reduce stress and keep him occupied. They monitor the apps and websites he uses, but things are not always as they initially appear. When Michael asked them to approve installing Linky AI, a quick review didn't reveal anything alarming, just a cartoonish platform to pass the time.


Training Language Models to Win Debates with Self-Play Improves Judge Accuracy

Arnesen, Samuel, Rein, David, Michael, Julian

arXiv.org Artificial Intelligence

We test the robustness of debate as a method of scalable oversight by training models to debate with data generated via self-play. In a long-context reading comprehension task, we find that language-model-based evaluators answer questions more accurately when judging models optimized to win debates. By contrast, we find no such relationship for consultancy models trained to persuade a judge without an opposing debater present. In quantitative and qualitative comparisons between our debate models and novel consultancy baselines, we find evidence that debate training encourages stronger and more informative arguments, showing promise that it can help provide high-quality supervision for tasks that are difficult to directly evaluate.