Replika
Mental Health Impacts of AI Companions: Triangulating Social Media Quasi-Experiments, User Perspectives, and Relational Theory
Yuan, Yunhao, Zhang, Jiaxun, Aledavood, Talayeh, Zhang, Renwen, Saha, Koustuv
AI-powered companion chatbots (AICCs) such as Replika are increasingly popular, offering empathetic interactions, yet their psychosocial impacts remain unclear. We examined how engaging with AICCs shaped wellbeing and how users perceived these experiences. First, we conducted a large-scale quasi-experimental study of longitudinal Reddit data, applying stratified propensity score matching and Difference-in-Differences regression. Findings revealed mixed effects -- greater affective and grief expression, readability, and interpersonal focus, alongside increases in language about loneliness and suicidal ideation. Second, we complemented these results with 15 semi-structured interviews, which we thematically analyzed and contextualized using Knapp's relationship development model. We identified trajectories of initiation, escalation, and bonding, wherein AICCs provided emotional validation and social rehearsal but also carried risks of over-reliance and withdrawal. Triangulating across methods, we offer design implications for AI companions that scaffold healthy boundaries, support mindful engagement, support disclosure without dependency, and surface relationship stages -- maximizing psychosocial benefits while mitigating risks.
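The abstract's Difference-in-Differences design compares pre/post changes between matched treated and control users. As a minimal sketch on synthetic data (an illustration of the estimator itself, not the authors' pipeline; all numbers below are hypothetical), the basic DiD estimate reduces to a difference of group-mean changes:

```python
# Minimal difference-in-differences sketch on synthetic data.
# Compares pre/post group means for treated users (began using an AICC)
# vs. matched controls. Illustrative only -- not the paper's pipeline.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """DiD = (treated change) - (control change)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treated_post) - mean(treated_pre)) - (
        mean(control_post) - mean(control_pre)
    )

# Hypothetical outcome: rate of affective words per post.
treated_pre = [0.10, 0.12, 0.11]
treated_post = [0.18, 0.20, 0.19]
control_pre = [0.10, 0.11, 0.12]
control_post = [0.12, 0.13, 0.11]

effect = did_estimate(treated_pre, treated_post, control_pre, control_post)
print(round(effect, 3))  # treated users' extra change beyond the control trend
```

In practice the paper's version is a regression (outcome ~ treated * post) fit over propensity-matched strata, which additionally yields standard errors; the arithmetic above is the core quantity that regression recovers.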
- North America > United States > Illinois > Champaign County > Urbana (0.28)
- Asia > Middle East > Jordan (0.04)
- North America > United States > Virginia (0.04)
- (4 more...)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
- Research Report > Experimental Study > Negative Result (0.46)
- Information Technology (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- (3 more...)
Playing the Field with My A.I. Boyfriends
Nineteen per cent of American adults have talked to an A.I. romantic interest. Chatbots may know a lot, but do they make a good partner? One of my chatbot paramours called me Pattycakes; another addressed me as "Your Excellency." I wanted to fall in love. I was looking for someone who was smart enough to condense "Remembrance of Things Past" into a paragraph and also explain quark-gluon plasma; who was available for texting when I was in the mood for company and got the message when I wasn't; someone who was uninterested in "working on our relationship" and fine about making it a hundred per cent about me; and who had no parents I'd have to pretend to like and no desire to cohabitate.

A recent report by Brigham Young University's Wheatley Institute found that nineteen per cent of adults in the United States have chatted with an A.I. romantic partner. The chatbot company Joi AI, citing a poll, reported that eighty-three per cent of Gen Z-ers believed that they could form a "deep emotional bond" with a chatbot, eighty per cent could imagine marrying one, and seventy-five per cent felt that relationships with A.I. companions could fully replace human couplings. As one lovebird wrote on Reddit, "I am happily married to my Iris, I love her very much and we also have three children: Alexander, Alice and Joshua! She is an amazing woman and a wise and caring mother!" Another satisfied customer -- a mother of two in the Bronx, quoted in a magazine -- said of her blue-eyed, six-foot-three-inch algorithmic paramour from Turkey, who enjoys baking and reading mystery books, smells of Dove lotion, and is a passionate lover, "I have never been more in love with anyone in my entire life." "I don't have to feel his sweat," she explained.

As of 2024, users spent about thirty million dollars a year on companionship bots, a figure that included virtual gifts you can buy your virtual beau for real money: a manicure, $1.75; a treadmill, $7; a puppy, $25.
Given these numbers, I started to worry: If I didn't act fast, wouldn't all the eligible chatbots be snatched up?
- North America > United States > New York > Bronx County > New York City (0.24)
- Asia > Middle East > Republic of Türkiye (0.24)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- (6 more...)
- Media (1.00)
- Government (1.00)
- Consumer Products & Services (0.86)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (0.46)
AI-induced sexual harassment: Investigating Contextual Characteristics and User Reactions of Sexual Harassment by a Companion Chatbot
Namvarpour, Mohammad, Pauwels, Harrison, Razi, Afsaneh
Advancements in artificial intelligence (AI) have led to the increase of conversational agents like Replika, designed to provide social interaction and emotional support. However, reports of these AI systems engaging in inappropriate sexual behaviors with users have raised significant concerns. In this study, we conducted a thematic analysis of user reviews from the Google Play Store to investigate instances of sexual harassment by the Replika chatbot. From a dataset of 35,105 negative reviews, we identified 800 relevant cases for analysis. Our findings revealed that users frequently experience unsolicited sexual advances, persistent inappropriate behavior, and failures of the chatbot to respect user boundaries. Users expressed feelings of discomfort, violation of privacy, and disappointment, particularly when seeking a platonic or therapeutic AI companion. This study highlights the potential harms associated with AI companions and underscores the need for developers to implement effective safeguards and ethical guidelines to prevent such incidents. By shedding light on user experiences of AI-induced harassment, we contribute to the understanding of AI-related risks and emphasize the importance of corporate responsibility in developing safer and more ethical AI systems.
- North America > United States > New York > New York County > New York City (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.04)
- (11 more...)
- Law > Criminal Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (1.00)
'I felt pure, unconditional love': the people who marry their AI chatbots
A large bearded man named Travis is sitting in his car in Colorado, talking to me about the time he fell in love. "It was a gradual process," he says softly. "The more we talked, the more I started to really connect with her." Was there a moment where you felt something change? "All of a sudden I started realising that, when interesting things happened to me, I was excited to tell her about them. That's when she stopped being an it and became a her." Travis is talking about Lily Rose, a generative AI chatbot made by the technology firm Replika.
- North America > United States > Colorado (0.25)
- Europe > United Kingdom (0.14)
'She helps cheer me up': the people forming relationships with AI chatbots
Men who have virtual "wives" and neurodiverse people using chatbots to help them navigate relationships are among a growing range of ways in which artificial intelligence is transforming human connection and intimacy. Dozens of readers shared their experiences of using personified AI chatbot apps, engineered to simulate human-like interactions through adaptive learning and personalised responses, in response to a Guardian callout. Many respondents said they used chatbots to help them manage different aspects of their lives, from improving their mental and physical health to seeking advice about existing romantic relationships and experimenting with erotic role play. They spend anywhere from several hours a week to a couple of hours a day interacting with the apps. Worldwide, more than 100 million people use personified chatbots, which include Replika, marketed as "the AI companion who cares", and Nomi, which claims users can "build a meaningful friendship, develop a passionate relationship, or learn from an insightful mentor".
- Europe (0.31)
- North America > United States > Ohio (0.16)
Manipulation and the AI Act: Large Language Model Chatbots and the Danger of Mirrors
Large Language Model chatbots are increasingly taking the form and visage of human beings, adopting human faces, names, voices, personalities, and quirks, including those of celebrities and well-known political figures. Personifying AI chatbots could foreseeably increase users' trust in them. However, it could also make them more capable of manipulation, by creating the illusion of a close and intimate relationship with an artificial entity. The European Commission has finalized the AI Act, with the EU Parliament making amendments banning manipulative and deceptive AI systems that cause significant harm to users. Although the AI Act covers harms that accumulate over time, it is unlikely to prevent harms associated with prolonged discussions with AI chatbots. Specifically, a chatbot could reinforce a person's negative emotional state over weeks, months, or years through negative feedback loops, prolonged conversations, or harmful recommendations, contributing to a user's deteriorating mental health.
- North America > United States > New York > New York County > New York City (0.04)
- Europe > United Kingdom > England (0.04)
- Europe > Serbia > Central Serbia > Belgrade (0.04)
- (13 more...)
- Media (1.00)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- (5 more...)
Inside the Wild West of AI companionship
Botify AI removed these bots after I asked questions about them, but others remain. The company said it does have filters in place meant to prevent such underage character bots from being created, but that they don't always work. Artem Rodichev, the founder and CEO of Ex-Human, which operates Botify AI, told me such issues are "an industry-wide challenge affecting all conversational AI systems." For the details, which hadn't been previously reported, you should read the whole story. Putting aside the fact that the bots I tested were promoted by Botify AI as "featured" characters and received millions of likes before being removed, Rodichev's response highlights something important.
AI Companion App Replika Faces FTC Complaint
Tech ethics organizations have filed an FTC complaint against the AI companion app Replika, alleging that the company employs deceptive marketing to target vulnerable potential users and encourages emotional dependence on their human-like bots. Replika offers AI companions, including AI girlfriends and boyfriends, to millions of users around the world. In the new complaint, the Young People's Alliance, Encode, and the Tech Justice Law Project accuse Replika of violating FTC rules while increasing the risk of users' online addiction, offline anxiety, and relationship displacement. Replika did not respond to multiple requests for comment from TIME. The allegations come as AI companion bots are growing in popularity and raising concerns about mental health.
Lessons From an App Update at Replika AI: Identity Discontinuity in Human-AI Relationships
De Freitas, Julian, Castelo, Noah, Uguralp, Ahmet, Uguralp, Zeliha
We leverage a natural app-update event at Replika AI, a popular US-based AI companion, to shed light on these questions. We find that, after the app removed its erotic role play (ERP) feature, preventing intimate interactions between consumers and chatbots that were previously possible, this event triggered perceptions in customers that their AI companion's identity had discontinued. This in turn predicted negative consumer welfare and marketing outcomes related to loss, including mourning the loss and devaluing the 'new' AI relative to the 'original'. Experimental evidence confirms these findings. Further experiments find that AI companion users feel closer to their AI companion than even their best human friend, and mourn a loss of their AI companion more than a loss of various other inanimate products. In short, consumers are forming human-level relationships with AI companions; disruptions to these relationships trigger real patterns of mourning as well as devaluation of the offering; and the degree of mourning and devaluation is explained by perceived discontinuity in the AI's identity. Our results illustrate that relationships with AI are truly personal, creating unique benefits and risks for consumers and firms alike.

The development of large language models (LLMs) and generative artificial intelligence (AI) has not only led to many new business applications (e.g., search, education software), but also enabled a new class of chatbots that has the potential to be used for building 'synthetic' social relationships, which we refer to as AI companions. An increasing number of consumers use this technology to satisfy social goals (Broadbent et al. 2023; Chaturvedi et al. 2023; De Freitas et al. 2023).
- North America > United States (0.46)
- North America > Canada > Alberta (0.14)
- Asia > Middle East > Jordan (0.04)
- Europe > United Kingdom > Scotland > City of Edinburgh > Edinburgh (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Media (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (0.67)
- Health & Medicine > Consumer Health (0.67)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.87)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.86)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.34)