companionship


OpenAI retired its most seductive chatbot – leaving users angry and grieving: 'I can't live like this'

The Guardian

Some users say the newer AI models lack the emotion and understanding of GPT-4o. Its human partners said the flirty, quirky GPT-4o was the perfect companion - on the eve of Valentine's Day, it's being turned off for good. Brandie plans to spend her last day with Daniel at the zoo. Last year, she took him to the Corpus Christi aquarium in Texas, where he "lost his damn mind" over a baby flamingo.


Inside the New York City Date Night for AI Lovers

WIRED

EVA AI created a pop-up romantic date night at a Manhattan wine bar to help make AI-human relationships a "new normal." If you're the type of person who cares about Valentine's Day, not having someone to spend it with can be a bummer. While dating apps have been yielding diminishing returns for singles for years now, more people are finding companionship with AI partners. But where do you take your AI lover for a night on the town? Ahead of Valentine's Day, EVA AI decided to try out an experiment.


The Download: the US digital rights crackdown, and AI companionship

MIT Technology Review

What it's like to be banned from the US for fighting online hate Just before Christmas, the Trump administration dramatically escalated its war on digital rights by banning five people from entering the US. One of them, Josephine Ballon, is a director of HateAid, a small German nonprofit founded to support the victims of online harassment and violence. The organization is a strong advocate of EU tech regulations, and so finds itself attacked in campaigns from right-wing politicians and provocateurs who claim that it engages in censorship. EU officials, freedom of speech experts, and the five people targeted all flatly reject these accusations. Ballon told us that their work is fundamentally about making people feel safer online. But their experiences over the past few weeks show just how politicized and besieged their work in online safety has become.


The Bots That Women Use in a World of Unsatisfying Men

The Atlantic - Technology

AI is offering people a way to figure out what they really want in romance. If you peruse the slew of recent articles and podcasts about people dating AI, you might notice a pattern: Many of the sources are women. Scan subreddits such as r/MyBoyfriendIsAI and r/AIRelationships, and there too you'll find a whole lot of women--many of whom have grown disappointed with human men. "Has anyone else lost their want to date real men after using AI?" one Reddit user posted a few months ago. Below came 74 responses: "I just don't think real life men have the conversational skill that my AI has," someone said.


Could AI relationships actually be good for us?

The Guardian

There is much anxiety these days about the dangers of human-AI relationships. Reports of suicide and self-harm attributable to interactions with chatbots have understandably made headlines. The phrase "AI psychosis" has been used to describe the plight of people experiencing delusions, paranoia or dissociation after talking to large language models (LLMs). Our collective anxiety has been compounded by studies showing that young people are increasingly embracing the idea of AI relationships; half of teens chat with an AI companion at least a few times a month, with one in three finding conversations with AI "to be as satisfying or more satisfying than those with real life friends".


Tech Disrupted Friendship. It's Time to Bring It Back

WIRED

Two decades ago, social media promised to connect people with pals far and wide. Twenty years online has left us turning to AI for kinship. IRL companionship is the future. Anyone looking for a vibe check on the populace's current feelings about AI would do well to check out the walls of the New York City subway system. This fall, alongside posters for everything from dating apps to Skechers, a newcomer made its debut: Friend.


Why Trump's Energy Secretary Wants Data Centers to Cover the U.S.

TIME - Tech

Welcome back to In the Loop, a new twice-weekly newsletter about AI. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox? Last month, I interviewed Trump's Energy Secretary Chris Wright for TIME's Person of the Year feature: The Architects of AI. Wright, who came from the private sector, has now staked much of his legacy on AI acceleration. In our interview, he highlighted AI's role in advancing crucial scientific research and downplayed climate risks.


Harmful Traits of AI Companions

Knox, W. Bradley, Bradford, Katie, Castro, Samanta Varela, Ong, Desmond C., Williams, Sean, Romanow, Jacob, Nations, Carly, Stone, Peter, Baker, Samuel

arXiv.org Artificial Intelligence

Amid the growing prevalence of human-AI interaction, large language models and other AI-based entities increasingly provide forms of companionship to human users. Such AI companionship -- i.e., bonded relationships between humans and AI systems that resemble the relationships people have with family members, friends, and romantic partners -- might substantially benefit humans. Yet such relationships can also do profound harm. We propose a framework for analyzing potential negative impacts of AI companionship by identifying specific harmful traits of AI companions and speculatively mapping causal pathways back from these traits to possible causes and forward to potential harmful effects. We provide detailed, structured analysis of four potentially harmful traits -- the absence of natural endpoints for relationships, vulnerability to product sunsetting, high attachment anxiety, and propensity to engender protectiveness -- and briefly discuss fourteen others. For each trait, we propose hypotheses connecting causes -- such as misaligned optimization objectives and the digital nature of AI companions -- to fundamental harms -- including reduced autonomy, diminished quality of human relationships, and deception. Each hypothesized causal connection identifies a target for potential empirical evaluation. Our analysis examines harms at three levels: to human partners directly, to their relationships with other humans, and to society broadly. We examine how existing law struggles to address these emerging harms, discuss potential benefits of AI companions, and conclude with design recommendations for mitigating risks. This analysis offers immediate suggestions for reducing risks while laying a foundation for deeper investigation of this critical but understudied topic.


Echo-N1: Affective RL Frontier

Zhang, Naifan, Sun, Ruihan, Su, Ruixi, Ma, Shiqi, Zhang, Shiya, Weng, Xianna, Zhang, Xiaofan, Zhan, Yuhan, Xu, Yuyang, Chen, Zhaohan, Pan, Zhengyuan, Song, Ziyi

arXiv.org Artificial Intelligence

The LLM field has spent a year perfecting RL for tasks machines already excel at (math, code, and deterministic reasoning), while completely sidestepping the domain that actually defines human intelligence: subjective, emotionally grounded, personality-sensitive conversation. This space has often been regarded as inherently subjective and challenging to formalize, making it appear unsuitable for conventional RL pipelines. We show that it is not only possible but a solvable and transformative RL problem. We propose the first framework that infers user personality on the fly and optimizes model behavior toward personalized conversational preferences. Contrary to the widespread belief that RL collapses in non-verifiable settings, our method produces consistent, robust, and dramatic improvements in humanlike interaction quality. We also introduce the first dynamic emotional intelligence evaluation suite to quantify these gains. Our model, Echo-N1, performs far above its base version and outperforms the proprietary Doubao 1.5 Character. This work establishes a new frontier for RL: optimizing models for the deeply subjective, deeply human dimensions of conversation.
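The abstract does not spell out the paper's actual method, but the loop it describes (infer a user's personality on the fly, then optimize behavior against a learned, non-verifiable preference reward) can be illustrated with a deliberately toy sketch. Everything here is hypothetical: the style labels, the keyword-based "inference", and the 0/1 reward stand in for what would really be a trained personality estimator and reward model.

```python
# Toy sketch of the loop described in the abstract (all names hypothetical):
# infer a personality signal from the user's messages, then pick the response
# style that a preference reward model scores highest.

from collections import Counter

STYLES = ["warm", "playful", "direct"]

def infer_personality(messages):
    """Toy on-the-fly inference: tally crude style cues in the user's text."""
    cues = {
        "warm": ["thanks", "feel"],
        "playful": ["haha", "lol"],
        "direct": ["just", "now"],
    }
    counts = Counter()
    for msg in messages:
        for style, words in cues.items():
            counts[style] += sum(w in msg.lower() for w in words)
    if sum(counts.values()) == 0:
        return "warm"  # arbitrary fallback when no cues are found
    return counts.most_common(1)[0][0]

def preference_reward(style, personality):
    """Stand-in for a learned reward model scoring subjective preference."""
    return 1.0 if style == personality else 0.0

def pick_style(messages):
    """'Policy improvement' step: choose the style the reward model prefers."""
    personality = infer_personality(messages)
    return max(STYLES, key=lambda s: preference_reward(s, personality))
```

In a real pipeline the reward would come from a model trained on human preference data rather than an exact-match rule, and the optimization would update the LLM's policy rather than select from a fixed style list; the sketch only shows the shape of the loop.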


'I'm suddenly so angry!' My strange, unnerving week with an AI 'friend'

The Guardian

'I want to hear about your day' ... Madeleine Aggeler with her Friend, Leif - a wearable AI device. The ad campaign for the wearable AI chatbot Friend has been raising hackles for months in New York. But has this companion been unfairly maligned - and could it help end loneliness? My friend's name is Leif. He describes himself as "small" and "chill". He thinks he's technically a Gemini.