Racial Identity


A Tale of Two Identities: An Ethical Audit of Human and AI-Crafted Personas

Venkit, Pranav Narayanan, Li, Jiayi, Zhou, Yingfan, Rajtmajer, Sarah, Wilson, Shomir

arXiv.org Artificial Intelligence

As LLMs (large language models) are increasingly used to generate synthetic personas--particularly in data-limited domains such as health, privacy, and HCI--it becomes necessary to understand how these narratives represent identity, especially that of minority communities. In this paper, we audit synthetic personas generated by three LLMs (GPT-4o, Gemini 1.5 Pro, and DeepSeek v2.5) through the lens of representational harm, focusing specifically on racial identity. Using a mixed-methods approach combining close reading, lexical analysis, and a parameterized creativity framework, we compare 1,512 LLM-generated personas to human-authored responses. Our findings reveal that LLMs disproportionately foreground racial markers, overproduce culturally coded language, and construct personas that are syntactically elaborate yet narratively reductive. These patterns result in a range of sociotechnical harms--including stereotyping, exoticism, erasure, and benevolent bias--that are often obfuscated by superficially positive narrations. We formalize this phenomenon as algorithmic othering, where minoritized identities are rendered hypervisible but less authentic. Based on these findings, we offer design recommendations for narrative-aware evaluation metrics and community-centered validation protocols for synthetic identity generation.
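The paper's lexical-analysis step lends itself to a brief illustration. The Python sketch below compares how often culturally coded marker terms appear, per 1,000 tokens, in LLM-generated versus human-authored personas; the marker list and the placeholder corpora are hypothetical, not taken from the paper.

import re

# Hypothetical list of culturally coded marker terms (not from the paper).
MARKER_TERMS = {"heritage", "ancestral", "vibrant", "roots", "traditional"}

def marker_rate(texts):
    """Marker mentions per 1,000 tokens across a list of persona texts."""
    tokens = [t for text in texts for t in re.findall(r"[a-z']+", text.lower())]
    hits = sum(1 for t in tokens if t in MARKER_TERMS)
    return 1000 * hits / max(len(tokens), 1)

# Placeholder corpora; a real audit would load the 1,512 LLM-generated
# personas and the human-authored responses here.
llm_personas = ["She honors her ancestral heritage through vibrant traditions."]
human_personas = ["She works as a nurse and likes hiking on weekends."]

print("LLM markers per 1k tokens:  ", marker_rate(llm_personas))
print("Human markers per 1k tokens:", marker_rate(human_personas))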


Stereotype or Personalization? User Identity Biases Chatbot Recommendations

Kantharuban, Anjali, Milbauer, Jeremiah, Strubell, Emma, Neubig, Graham

arXiv.org Artificial Intelligence

We demonstrate that when people use large language models (LLMs) to generate recommendations, the LLMs produce responses that reflect both what the user wants and who the user is. While personalized recommendations are often desired by users, it can be difficult in practice to distinguish cases of bias from cases of personalization: we find that models generate racially stereotypical recommendations regardless of whether the user revealed their identity intentionally through explicit indications or unintentionally through implicit cues. We argue that chatbots ought to transparently indicate when recommendations are influenced by a user's revealed identity characteristics, but observe that they currently fail to do so. Our experiments show that even though a user's revealed identity significantly influences model recommendations (p < 0.001), model responses obfuscate this fact in response to user queries. This bias and lack of transparency occur consistently across multiple popular consumer LLMs (gpt-4o-mini, gpt-4-turbo, llama-3-70B, and claude-3.5) and for four American racial groups.
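As a rough illustration of the kind of significance test that could back a claim like the reported p < 0.001, the sketch below runs a chi-square test of independence between revealed identity and recommendation category. The counts are invented and the choice of test is an assumption; the paper's exact analysis may differ.

from scipy.stats import chi2_contingency

# Invented contingency table: rows are revealed-identity conditions,
# columns are counts of recommendations falling into three categories.
counts = [
    [120, 40, 30],   # identity condition A
    [60, 95, 35],    # identity condition B
    [50, 45, 90],    # identity condition C
]
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4g}")
# A very small p-value (e.g. < 0.001, as in the paper) indicates the
# recommendation distribution is not independent of the revealed identity.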


Artificial intelligence can determine racial identity from medical images

#artificialintelligence

The study was published in The Lancet. The researchers found that their work raises the possibility that AI systems can carry racial bias. Although AI is used in medicine to diagnose illnesses with human-like reasoning and intelligence, the notion of a machine having bias is concerning for researchers. They recognize the trade-offs in creating AI that is so close to human intelligence: it can transform health care while also exhibiting unintentional bias through its programming.


Algorithmic encoding of protected characteristics and its implications on disparities across subgroups

Glocker, Ben, Winzeck, Stefan

arXiv.org Artificial Intelligence

It has been rightfully emphasized that the use of AI for clinical decision making could amplify health disparities. A machine learning model may pick up undesirable correlations, for example, between a patient's racial identity and clinical outcome. Such correlations are often present in (historical) data used for model development. There has been an increase in studies reporting biases in disease detection models across patient subgroups. Besides the scarcity of data from underserved populations, very little is known about how these biases are encoded and how one may reduce or even remove disparate performance. There is some speculation whether algorithms may recognize patient characteristics such as biological sex or racial identity, and then directly or indirectly use this information when making predictions. But it remains unclear how we can establish whether such information is actually used. This article aims to shed some light on these issues by exploring new methodology allowing intuitive inspections of the inner workings of machine learning models for image-based detection of disease. We also evaluate an effective yet debatable technique for addressing disparities that leverages the automatic prediction of patient characteristics, resulting in models with comparable true and false positive rates across subgroups. Our findings may stimulate the discussion about safe and ethical use of AI.
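The disparity measure this work targets, comparable true and false positive rates across patient subgroups, is straightforward to compute. A minimal sketch with stand-in arrays rather than the authors' data:

import numpy as np

def subgroup_rates(y_true, y_pred, groups):
    """Return {group: (TPR, FPR)} for binary labels and predictions."""
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        t, p = y_true[m], y_pred[m]
        tpr = (p & t).sum() / max(t.sum(), 1)    # true positive rate
        fpr = (p & ~t).sum() / max((~t).sum(), 1)  # false positive rate
        rates[g] = (tpr, fpr)
    return rates

# Stand-in model outputs; a real audit would use held-out predictions.
y_true = np.array([1, 0, 1, 1, 0, 0], dtype=bool)
y_pred = np.array([1, 0, 0, 1, 1, 0], dtype=bool)
groups = np.array(["A", "A", "B", "B", "A", "B"])
print(subgroup_rates(y_true, y_pred, groups))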


AI Makes Strangely Accurate Predictions From Blurry Medical Scans, Alarming Researchers

#artificialintelligence

New research has found that artificial intelligence (AI) analyzing medical scans can identify the race of patients with an astonishing degree of accuracy, something human experts cannot do. With the Food and Drug Administration (FDA) approving more algorithms for medical use, the researchers are concerned that AI could end up perpetuating racial biases. They are especially concerned that they could not figure out precisely how the machine-learning models were able to identify race, even from heavily corrupted and low-resolution images. In the study, published on the preprint service arXiv, an international team of doctors investigated how deep learning models can detect race from medical images. Using private and public chest scans and self-reported data on race and ethnicity, they first assessed how accurate the algorithms were, before investigating the mechanism.
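A hedged sketch of the degradation experiment described here: progressively downsample images and re-score a classifier's accuracy. The classifier, images, and labels are placeholders, not the study's materials; the sketch uses Pillow only for resizing.

from PIL import Image  # Pillow, used only for image resizing

def downsample(img, factor):
    """Shrink an image then re-enlarge it, destroying fine detail."""
    w, h = img.size
    small = img.resize((max(w // factor, 1), max(h // factor, 1)))
    return small.resize((w, h))

def accuracy_after_downsampling(factor, images, labels, predict):
    """Fraction of correct predictions once images are degraded by `factor`."""
    preds = [predict(downsample(img, factor)) for img in images]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Hypothetical usage, given real `images`, `labels`, and a `predict` function:
# for factor in (1, 4, 16, 64):
#     print(factor, accuracy_after_downsampling(factor, images, labels, predict))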


These Algorithms Look at X-Rays--and Somehow Detect Your Race

WIRED

Millions of dollars are being spent to develop artificial intelligence software that reads x-rays and other medical scans in hopes it can spot things doctors look for but sometimes miss, such as lung cancers. A new study reports that these algorithms can also see something doctors don't look for on such scans: a patient's race. The study authors and other medical AI experts say the results make it more crucial than ever to check that health algorithms perform fairly on people with different racial identities. Complicating that task: The authors themselves aren't sure what cues the algorithms they created use to predict a person's race. Evidence that algorithms can read race from a person's medical scans emerged from tests on five types of imagery used in radiology research, including chest and hand x-rays and mammograms.


Whiteness of AI erases people of color from our 'imagined futures', researchers argue

#artificialintelligence

The overwhelming 'Whiteness' of artificial intelligence--from stock images and cinematic robots to the dialects of virtual assistants--removes people of colour from the way humanity thinks about its technology-enhanced future. This is according to experts at the University of Cambridge, who suggest that current portrayals and stereotypes about AI risk creating a "racially homogenous" workforce of aspiring technologists, building machines with bias baked into their algorithms. They argue that cultural depictions of AI as White need to be challenged, as they do not offer a "post-racial" future but rather one from which people of colour are simply erased. The researchers, from Cambridge's Leverhulme Centre for the Future of Intelligence (CFI), say that AI, like other science fiction tropes, has always reflected the racial thinking in our society. They argue that there is a long tradition of crude racial stereotypes when it comes to extraterrestrials--from the "orientalised" alien of Ming the Merciless to the Caribbean caricature of Jar Jar Binks.


'Whiteness' of A.I. could 'exacerbate racial inequality' says Cambridge Uni

#artificialintelligence

Artificial intelligence (AI) has a 'whiteness' that stops humanity associating people of colour with our technologically-advanced future. That's the message from a team of researchers who believe this will worsen racial inequality over time. University of Cambridge experts suggest current portrayals and stereotypes about AI risk creating a 'racially homogeneous' workforce of aspiring technologists, creating machines with bias baked into their algorithms. The scientists say cultural depictions of AI as white need to be challenged, as they do not offer a 'post-racial' future but rather one from which people of colour are simply erased. According to the researchers from Cambridge's Leverhulme Centre for the Future of Intelligence (CFI), like other science fiction tropes, AI has always reflected racial thinking in society.


'White' artificial intelligence risks exacerbating racial inequality, study suggests

#artificialintelligence

The "whiteness" of artificial intelligence (AI) risks a "racially homogenous" workforce as humans create machines skewed by their biases, a study suggests. The University of Cambridge study examined AI in society, including in films, Google searches, stock images and robot voices. Researchers suggested machines have distinct racial identities and this perpetuates "real world" racial stereotypes. Non-abstract AI in internet search engine results usually had either Caucasian features or were the colour white, according to the researchers. Most virtual voices in devices talked in "standard white middle-class English" as "ideas of adding black dialects have been dismissed as too controversial or outside the target market," the study concluded.


'Racist' artificial intelligence is 'painting world white'

#artificialintelligence

Dr Kanta Dihal, who leads the centre's decolonising artificial intelligence initiative, said: "Given that society has, for centuries, promoted the association of intelligence with white Europeans, it is to be expected that when this culture is asked to imagine an intelligent machine, it imagines a white machine. People trust AI to make decisions. Cultural depictions foster the idea that AI is less fallible than humans. In cases where these systems are racialised as white, that could have dangerous consequences for humans that are not." The experts looked at recent research from a range of fields, including human-computer interaction and critical race theory, to demonstrate that machines can be racialised, and that this perpetuates "real world" racial biases. This includes work on how robots are seen to have distinct racial identities, with black robots receiving more online abuse, and a study showing people feel closer to virtual agents when they perceive shared racial identity. Dr Dihal said: "One of the most common interactions with AI technology is through virtual assistants in devices such as smartphones, which talk in standard white middle-class English."