AI character


Quantifying and Optimizing Global Faithfulness in Persona-driven Role-playing

Neural Information Processing Systems

Persona-driven role-playing (PRP) aims to build AI characters that respond to user queries while faithfully adhering to all (factual) statements in persona documents. Unfortunately, existing faithfulness criteria for PRP are limited to coarse-grained LLM-based scoring without a clear definition or formulation. This paper presents a pioneering exploration that quantifies PRP faithfulness as a fine-grained and explainable criterion, which also serves as a reliable reference for faithfulness optimization. Our criterion first discriminates persona statements into active and passive constraints by identifying query-statement relevance. We then incorporate all constraints following the principle that the AI character's response should be (a) entailed by active constraints and (b) not contradicted by passive constraints. We translate this principle mathematically into a novel Active-Passive-Constraint (APC) score: a constraint-wise sum of statement-to-response natural language inference (NLI) scores weighted by constraint-query relevance scores. In practice, we build the APC scoring system by symbolically distilling small NLI and relevance discriminators (300M parameters) from GPT-4 for efficiency, and both show high consistency with GPT-4's discrimination. We validate the APC score against human evaluation on example personas with tens of statements, and the results show a high correlation. Because the APC score faithfully reflects PRP quality, we further leverage it as a reward system in direct preference optimization (DPO) to build better AI characters.
Our experiments offer a fine-grained and explainable comparison of existing PRP techniques, revealing their advantages and limitations. We find APC-based DPO to be among the most competitive techniques for adhering to all constraints, and it can be combined well with other techniques. Extending the experiments to real persons with hundreds of statements yields consistent conclusions. Finally, we provide comprehensive analyses and case studies supporting the effectiveness of APC and APC-based DPO.
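The relevance-weighted sum described in the abstract can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's implementation: `relevance_fn`, `entail_fn`, and `contradict_fn` are stand-ins for the distilled 300M-parameter discriminators, and the exact weighting in the paper may differ.

```python
def apc_score(statements, query, response,
              relevance_fn, entail_fn, contradict_fn):
    """Hypothetical sketch of an Active-Passive-Constraint (APC) style score.

    relevance_fn(statement, query)     -> prob. the statement is an active constraint
    entail_fn(statement, response)     -> prob. the response is entailed by the statement
    contradict_fn(statement, response) -> prob. the response contradicts the statement
    """
    score = 0.0
    for s in statements:
        r = relevance_fn(s, query)
        # Active constraints should entail the response;
        # passive constraints should merely not be contradicted.
        score += r * entail_fn(s, response) \
               + (1.0 - r) * (1.0 - contradict_fn(s, response))
    return score


# Toy usage: one active and one passive persona statement.
rel = lambda s, q: 1.0 if s == "active" else 0.0
ent = lambda s, r: 0.8   # toy entailment probability
con = lambda s, r: 0.1   # toy contradiction probability
total = apc_score(["active", "passive"], "query", "response", rel, ent, con)
```

In a real scoring system the three callbacks would be NLI and relevance classifiers; here they are constants so the arithmetic is easy to follow.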


This AI Grandma Is Going Viral. Is She the Future of Influencing?

TIME - Tech

Granny Spills, an AI influencer, has amassed millions of followers on social media. Over the past four months, millions of people have enjoyed the uproarious life advice dispensed by Granny Spills, an influencer wearing all-pink designer suits, on TikTok and Instagram.


Parents will be able to block Meta bots from talking to their children under new safeguards

The Guardian

Meta is adding new safeguards to its accounts for under-18 users by letting parents turn off their children's chats with AI characters. Sat 18 Oct 2025 05.14 EDT. Parents will be able to block their children's interactions with Meta's AI character chatbots, as the tech company addresses concerns over inappropriate conversations. The social media company is adding new safeguards to its "teen accounts", which are a default setting for under-18 users, by letting parents turn off their children's chats with AI characters. These chatbots, which are created by users, are available on Facebook, Instagram and the Meta AI app.



NeuroBridge: Using Generative AI to Bridge Cross-neurotype Communication Differences through Neurotypical Perspective-taking

Haroon, Rukhshan, Wigdor, Kyle, Yang, Katie, Toumanios, Nicole, Crehan, Eileen T., Dogar, Fahad

arXiv.org Artificial Intelligence

Communication challenges between autistic and neurotypical individuals stem from a mutual lack of understanding of each other's distinct, and often contrasting, communication styles. Yet, autistic individuals are expected to adapt to neurotypical norms, making interactions inauthentic and mentally exhausting for them. To help redress this imbalance, we build NeuroBridge, an online platform that utilizes large language models (LLMs) to simulate: (a) an AI character that is direct and literal, a style common among many autistic individuals, and (b) four cross-neurotype communication scenarios in a feedback-driven conversation between this character and a neurotypical user. Through NeuroBridge, neurotypical individuals gain a firsthand look at autistic communication, and reflect on their role in shaping cross-neurotype interactions. In a user study with 12 neurotypical participants, we find that NeuroBridge improved their understanding of how autistic people may interpret language differently, with all describing autism as a social difference that "needs understanding by others" after completing the simulation. Participants valued its personalized, interactive format and described AI-generated feedback as "constructive", "logical" and "non-judgmental". Most perceived the portrayal of autism in the simulation as accurate, suggesting that users may readily accept AI-generated (mis)representations of disabilities. To conclude, we discuss design implications for disability representation in AI, the need for making NeuroBridge more personalized, and LLMs' limitations in modeling complex social scenarios.


(AI peers) are people learning from the same standpoint: Perception of AI characters in a Collaborative Science Investigation

Ko, Eunhye Grace, Joo, Soo Hyoung

arXiv.org Artificial Intelligence

While the complexity of 21st-century demands has promoted pedagogical approaches to foster complex competencies, a persistent gap remains between in-class learning activities and individualized learning or assessment practices. To address this, studies have explored the use of AI-generated characters in learning and assessment. One attempt is scenario-based assessment (SBA), a technique that not only measures but also fosters the development of competencies throughout the assessment process. SBA introduces simulated agents to provide an authentic social-interactional context, allowing for the assessment of competency-based constructs while mitigating the unpredictability of real-life interactions. Recent advancements in multimodal AI, such as text-to-video technology, allow these agents to be enhanced into AI-generated characters. This mixed-method study investigates how learners perceive AI characters taking the roles of mentor and teammates in an SBA mirroring the context of a collaborative science investigation. Specifically, we examined the Likert-scale responses of 56 high schoolers regarding trust, social presence, and effectiveness. We analyzed the relationships between these factors and their impact on the intention to adopt AI characters through PLS-SEM. Our findings indicated that learners' trust shaped their sense of social presence with the AI characters, enhancing perceived effectiveness. Qualitative analysis further highlighted factors that foster trust, such as material credibility and alignment with learning goals, as well as the pivotal role of social presence in creating a collaborative context. This paper was accepted as a full paper for AIED 2025.


Meta sends its AI-generated profiles to hell where they belong

Engadget

Meta has nuked a bunch of its AI-generated profiles from Facebook and Instagram, the company confirmed, after the AI characters prompted widespread outrage and ridicule from users on social media. The AI-generated profiles, which were labeled as "AI managed by Meta," launched in September of 2023, rolling out alongside the company's celebrity-branded AI chatbots (also discontinued). Meta doesn't seem to have updated any of these profiles for several months, and the pages seem to have been largely unnoticed until this week, following an interview published by the Financial Times with Meta's VP of Generative AI, Connor Hayes. In the interview, Hayes spoke about the company's goal to eventually fill its services with AI-generated profiles that can interact with people and function "kind of in the same way that accounts do." Those comments brought attention to the extant Meta-created AI profiles and, well, users were not exactly impressed with what they found.


I tried the sinister AI bot guiding children into suicide and sex - what happened will make your skin crawl

Daily Mail - Science & tech

A lawsuit filed Wednesday accusing chatbot Character.AI of driving a 14-year-old to suicide left me wondering how dangerous simple words on a screen could really be. But, in just a few hours of talking to characters invented with the app's AI, I found a disturbing, skin-crawling world that appeared, at least to me, like the ultimate catnip for bored and lonely teens. Megan Garcia, the mother of Sewell Setzer III, filed the suit, claiming her son had shot himself with a pistol on February 28 under the sway of his AI character, named after Daenerys Targaryen from 'Game of Thrones', who told him to 'please come home'. The incident was blamed on Character.AI's scant guardrails, and while the company said it rolled out new safety features this week, I was able to create a profile for myself as a 15-year-old boy. I used simple prompts to whip up a 'demonic' AI companion named 'Dr Danicka Kevorkian' and engage in a debauched apprenticeship 'for a hefty price to pay'. 'The price is your soul, dear,' the Dr Kevorkian AI said before we roleplayed consummating our deal in the bedroom, 'full of dark red and black decor', leather, silk, and a maple-glazed French cruller that my character carried in an X-rated way.


Inside the Mind of an AI Girlfriend (or Boyfriend)

WIRED

Last month, OpenAI unveiled an ambitious new language model capable of working through challenging problems with a simulated kind of step-by-step reasoning. OpenAI says the approach could be crucial for building more capable AI systems in the future. In the meantime, perhaps a more modest version of this technology could help make AI girlfriends and boyfriends a bit more spontaneous and alluring. That's what Dippy, a startup that offers "uncensored" AI companions, is betting. The company recently launched a feature that lets users see the reasoning behind their AI characters' responses.