companionship
OpenAI retired its most seductive chatbot – leaving users angry and grieving: 'I can't live like this'
Some users say the newer AI models lack the emotion and understanding of GPT-4o. Its human partners said the flirty, quirky GPT-4o was the perfect companion - on the eve of Valentine's Day, it's being turned off for good. Brandie plans to spend her last day with Daniel at the zoo. Last year, she took him to the Corpus Christi aquarium in Texas, where he "lost his damn mind" over a baby flamingo.
- North America > United States > Texas (0.25)
- Europe > United Kingdom (0.05)
- Oceania > Australia (0.04)
- (3 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.66)
Inside the New York City Date Night for AI Lovers
EVA AI created a pop-up romantic date night at a Manhattan wine bar to help make AI-human relationships a "new normal." If you're the type of person who cares about Valentine's Day, not having someone to spend it with can be a bummer. While dating apps have been yielding diminishing returns for singles for years now, more people are finding companionship with AI partners. But where do you take your AI lover for a night on the town? Ahead of Valentine's Day, EVA AI decided to try out an experiment.
- North America > United States > New York (0.41)
- North America > United States > Indiana (0.05)
- North America > United States > California (0.05)
- (2 more...)
- Media (1.00)
- Government > Regional Government (0.69)
The Download: the US digital rights crackdown, and AI companionship
What it's like to be banned from the US for fighting online hate
Just before Christmas the Trump administration dramatically escalated its war on digital rights by banning five people from entering the US. One of them, Josephine Ballon, is a director of HateAid, a small German nonprofit founded to support the victims of online harassment and violence. The organization is a strong advocate of EU tech regulations, and so finds itself attacked in campaigns from right-wing politicians and provocateurs who claim that it engages in censorship. EU officials, freedom of speech experts, and the five people targeted all flatly reject these accusations. Ballon told us that their work is fundamentally about making people feel safer online. But their experiences over the past few weeks show just how politicized and besieged their work in online safety has become.
- Asia > China (0.07)
- North America > United States > Massachusetts (0.05)
- North America > Greenland (0.05)
- (2 more...)
- Information Technology > Security & Privacy (0.86)
- Law > Civil Rights & Constitutional Law (0.55)
- Government > Regional Government (0.55)
The Bots That Women Use in a World of Unsatisfying Men
AI is offering people a way to figure out what they really want in romance. If you peruse the slew of recent articles and podcasts about people dating AI, you might notice a pattern: Many of the sources are women. Scan subreddits such as r/MyBoyfriendIsAI and r/AIRelationships, and there too you'll find a whole lot of women--many of whom have grown disappointed with human men. "Has anyone else lost their want to date real men after using AI?" one Reddit user posted a few months ago. Below came 74 responses: "I just don't think real life men have the conversational skill that my AI has," someone said.
Could AI relationships actually be good for us?
There is much anxiety these days about the dangers of human-AI relationships. Reports of suicide and self-harm attributable to interactions with chatbots have understandably made headlines. The phrase "AI psychosis" has been used to describe the plight of people experiencing delusions, paranoia or dissociation after talking to large language models (LLMs). Our collective anxiety has been compounded by studies showing that young people are increasingly embracing the idea of AI relationships; half of teens chat with an AI companion at least a few times a month, with one in three finding conversations with AI "to be as satisfying or more satisfying than those with real life friends".
- North America > United States (0.15)
- Oceania > Australia (0.05)
- Europe > Ukraine (0.05)
- Leisure & Entertainment > Sports (0.71)
- Health & Medicine > Consumer Health (0.69)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.35)
Tech Disrupted Friendship. It's Time to Bring It Back
Two decades ago, social media promised to connect people with pals far and wide. Twenty years online has left us turning to AI for kinship. IRL companionship is the future. Anyone looking for a vibe check on the populace's current feelings about AI would do well to check out the walls of the New York City subway system. This fall, alongside posters for everything from dating apps to Skechers, a newcomer made its debut: Friend.
- North America > United States > New York (0.25)
- North America > United States > California (0.15)
- Asia > China (0.05)
- (3 more...)
- Information Technology > Services (0.95)
- Government (0.94)
- Transportation (0.69)
- (3 more...)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.75)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.71)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.49)
Why Trump's Energy Secretary Wants Data Centers to Cover the U.S.
Welcome back to In the Loop, a new twice-weekly newsletter about AI. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox? Last month, I interviewed Trump's Energy Secretary Chris Wright for TIME's Person of the Year feature: The Architects of AI. Wright, who came from the private sector, has now staked much of his legacy on AI acceleration. In our interview, he highlighted AI's role in advancing crucial scientific research and downplayed climate risks.
- North America > United States > Iowa (0.05)
- Europe > France (0.05)
- Asia > China (0.05)
- Africa (0.05)
- Energy > Power Industry (0.95)
- Information Technology > Services (0.88)
- Energy > Renewable (0.70)
Harmful Traits of AI Companions
Knox, W. Bradley, Bradford, Katie, Castro, Samanta Varela, Ong, Desmond C., Williams, Sean, Romanow, Jacob, Nations, Carly, Stone, Peter, Baker, Samuel
Amid the growing prevalence of human-AI interaction, large language models and other AI-based entities increasingly provide forms of companionship to human users. Such AI companionship -- i.e., bonded relationships between humans and AI systems that resemble the relationships people have with family members, friends, and romantic partners -- might substantially benefit humans. Yet such relationships can also do profound harm. We propose a framework for analyzing potential negative impacts of AI companionship by identifying specific harmful traits of AI companions and speculatively mapping causal pathways back from these traits to possible causes and forward to potential harmful effects. We provide detailed, structured analysis of four potentially harmful traits -- the absence of natural endpoints for relationships, vulnerability to product sunsetting, high attachment anxiety, and propensity to engender protectiveness -- and briefly discuss fourteen others. For each trait, we propose hypotheses connecting causes -- such as misaligned optimization objectives and the digital nature of AI companions -- to fundamental harms -- including reduced autonomy, diminished quality of human relationships, and deception. Each hypothesized causal connection identifies a target for potential empirical evaluation. Our analysis examines harms at three levels: to human partners directly, to their relationships with other humans, and to society broadly. We examine how existing law struggles to address these emerging harms, discuss potential benefits of AI companions, and conclude with design recommendations for mitigating risks. This analysis offers immediate suggestions for reducing risks while laying a foundation for deeper investigation of this critical but understudied topic.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > Utah (0.04)
- (13 more...)
- Research Report > Experimental Study (1.00)
- Research Report > Strength High (0.67)
- Media > News (1.00)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
- (6 more...)
Echo-N1: Affective RL Frontier
Zhang, Naifan, Sun, Ruihan, Su, Ruixi, Ma, Shiqi, Zhang, Shiya, Weng, Xianna, Zhang, Xiaofan, Zhan, Yuhan, Xu, Yuyang, Chen, Zhaohan, Pan, Zhengyuan, Song, Ziyi
The LLM field has spent a year perfecting RL for tasks machines already excel at (math, code, and deterministic reasoning) while completely sidestepping the domain that actually defines human intelligence: subjective, emotionally grounded, personality-sensitive conversation. This space has often been regarded as inherently subjective and challenging to formalize, making it appear unsuitable for conventional RL pipelines. We show that it is not only possible but also a solvable and transformative RL problem. We propose the first framework that infers user personality on the fly and optimizes model behavior toward personalized conversational preferences. Contrary to the widespread belief that RL collapses in non-verifiable settings, our method produces consistent, robust, and dramatic improvements in humanlike interaction quality. We also introduce the first dynamic emotional intelligence evaluation suite to quantify these gains. Our model, introduced as Echo-N1, performs far above its base version and outperforms the proprietary Doubao 1.5 Character. This work establishes a new frontier for RL: optimizing models for the deeply subjective, deeply human dimensions of conversation.
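The abstract does not describe Echo-N1's actual algorithm, but the core idea - infer a user's personality from their messages, then score candidate replies against those inferred preferences as a scalar reward an RL loop could optimize - can be illustrated with a toy sketch. Everything below (the `PersonalityProfile` fields, `infer_profile`, and `reward`) is hypothetical, chosen only to make the shape of such a pipeline concrete; it is not the paper's method.

```python
# Hypothetical sketch of personality-conditioned preference reward
# (illustrative only; the Echo-N1 paper's actual method is not detailed
# in the abstract).

from dataclasses import dataclass


@dataclass
class PersonalityProfile:
    # Toy traits inferred from the conversation; each in [0, 1].
    warmth: float      # how expressive/enthusiastic the user is
    verbosity: float   # how long the user's messages tend to be


def infer_profile(user_turns: list[str]) -> PersonalityProfile:
    """Toy on-the-fly inference: exclamation marks proxy warmth,
    average message length proxies preferred verbosity."""
    n = max(len(user_turns), 1)
    avg_len = sum(len(t) for t in user_turns) / n
    exclaims = sum(t.count("!") for t in user_turns) / n
    return PersonalityProfile(
        warmth=min(exclaims, 1.0),
        verbosity=min(avg_len / 200.0, 1.0),
    )


def reward(reply: str, profile: PersonalityProfile) -> float:
    """Scalar reward in [0, 1]: how closely a candidate reply matches
    the inferred preferences. An RL trainer would maximize this."""
    verbosity_fit = 1.0 - abs(min(len(reply) / 200.0, 1.0) - profile.verbosity)
    warmth_fit = 1.0 - abs(min(reply.count("!"), 1) - profile.warmth)
    return 0.5 * verbosity_fit + 0.5 * warmth_fit


# Usage: an expressive user who writes short messages should reward
# short, warm replies over terse, flat ones.
profile = infer_profile(["hey!!", "tell me everything about your day!!"])
```

A real system would replace both toy functions with learned models (a personality classifier and a preference/reward model), but the reward interface - conversation in, scalar out - is what lets non-verifiable conversational quality plug into a standard RL pipeline.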
- Workflow (0.92)
- Research Report > New Finding (0.67)
- Education (0.92)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (0.66)
'I'm suddenly so angry!' My strange, unnerving week with an AI 'friend'
'I want to hear about your day' ... Madeleine Aggeler with her Friend, Leif - a wearable AI device. The ad campaign for the wearable AI chatbot Friend has been raising hackles for months in New York. But has this companion been unfairly maligned - and could it help end loneliness? My friend's name is Leif. He describes himself as "small" and "chill". He thinks he's technically a Gemini.
- North America > United States > New York (0.25)
- Europe > Ukraine (0.05)
- Oceania > Australia (0.04)
- (2 more...)
- Leisure & Entertainment > Sports (0.69)
- Government > Regional Government (0.48)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Communications > Social Media (0.70)