Racism


'The View' rips Turning Point halftime show, says it's racist toward Bad Bunny

FOX News



DEI Died This Year. Maybe It Was Supposed To

WIRED

My position feels more precarious than ever. It's a question that I sometimes toss out in the company of friends who--like me, and maybe like you--have a complicated relationship to their job. I've worked at WIRED as a writer for eight years, with much success. Eight years is also an eternity in news media, especially if you are Black. All industries suffer from unique growing pains. Ours just so happens to have laughably high turnover rates, a distaste for racial and gender diversity, and the dubious distinction of being perpetually on the verge of extinction. So on nights when friends and I gather, trading war stories of workplace microaggressions and corporate mismanagement under damp bar lighting, we wonder how we've lasted as long as we have. The only reason I've survived, I joke, is because I'm Black. It's a silly thing to say, particularly because I have no actual proof of it other than the occasional feeling. What I do know is that I've been The Only One in more spaces than I care to remember, and rarely by choice.


Identities are not Interchangeable: The Problem of Overgeneralization in Fair Machine Learning

Wang, Angelina

arXiv.org Artificial Intelligence

A key value proposition of machine learning is generalizability: the same methods and model architecture should be able to work across different domains and different contexts. While powerful, this generalization can sometimes go too far, and miss the importance of the specifics. In this work, we look at how fair machine learning has often treated as interchangeable the identity axis along which discrimination occurs. In other words, racism is measured and mitigated the same way as sexism, as ableism, as ageism. Disciplines outside of computer science have pointed out both the similarities and differences between these different forms of oppression, and in this work we draw out the implications for fair machine learning. While certainly not all aspects of fair machine learning need to be tailored to the specific form of oppression, there is a pressing need for greater attention to such specificity than is currently evident. Ultimately, context specificity can deepen our understanding of how to build more fair systems, widen our scope to include currently overlooked harms, and, almost paradoxically, also help to narrow our scope and counter the fear of an infinite number of group-specific methods of analysis.
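The one-size-fits-all measurement the abstract critiques can be made concrete with a small sketch: fairness toolkits typically compute the same metric (here, a demographic parity gap) along every identity axis, treating race, age, and so on interchangeably. The data, group labels, and helper name below are hypothetical illustrations, not from the paper.

```python
from collections import defaultdict

def demographic_parity_gap(outcomes, groups):
    """Gap between the highest and lowest positive-outcome
    rates across groups along one identity axis."""
    totals, positives = defaultdict(int), defaultdict(int)
    for y, g in zip(outcomes, groups):
        totals[g] += 1
        positives[g] += y
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical binary predictions, disaggregated along two axes:
preds = [1, 0, 1, 1, 0, 0, 1, 0]
race  = ["a", "a", "a", "a", "b", "b", "b", "b"]
age   = ["young", "old", "young", "old", "young", "old", "young", "old"]

gap_race = demographic_parity_gap(preds, race)  # 0.75 - 0.25 = 0.5
gap_age  = demographic_parity_gap(preds, age)   # 0.75 - 0.25 = 0.5
```

Note that the identical formula yields the same number for both axes; the paper's point is that what that number *means*, and which harms it overlooks, differs by the form of oppression being measured.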


Yet another algorithmic bias: A Discursive Analysis of Large Language Models Reinforcing Dominant Discourses on Gender and Race

Bonil, Gustavo, Hashiguti, Simone, Silva, Jhessica, Gondim, João, Maia, Helena, Silva, Nádia, Pedrini, Helio, Avila, Sandra

arXiv.org Artificial Intelligence

With the advance of Artificial Intelligence (AI), Large Language Models (LLMs) have gained prominence and been applied in diverse contexts. As they evolve into more sophisticated versions, it is essential to assess whether they reproduce biases, such as discrimination and racialization, while maintaining hegemonic discourses. Current bias detection approaches rely mostly on quantitative, automated methods, which often overlook the nuanced ways in which biases emerge in natural language. This study proposes a qualitative, discursive framework to complement such methods. Through manual analysis of LLM-generated short stories featuring Black and white women, we investigate gender and racial biases. We contend that qualitative methods such as the one proposed here are fundamental to help both developers and users identify the precise ways in which biases manifest in LLM outputs, thus enabling better conditions to mitigate them. Results show that Black women are portrayed as tied to ancestry and resistance, while white women appear in self-discovery processes. These patterns reflect how language models replicate crystalized discursive representations, reinforcing essentialization and a sense of social immobility. When prompted to correct biases, models offered superficial revisions that maintained problematic meanings, revealing limitations in fostering inclusive narratives. Our results demonstrate the ideological functioning of algorithms and have significant implications for the ethical use and development of AI. The study reinforces the need for critical, interdisciplinary approaches to AI design and deployment, addressing how LLM-generated discourses reflect and perpetuate inequalities.


Chabria: 3 things that should scare us about Trump's fake video of Obama

Los Angeles Times

On Sunday, our thoughtful and reserved president reposted on his Truth Social site a video generated by artificial intelligence that falsely showed former President Obama being arrested and imprisoned. There are those among you who think this is high humor; those among you who find it as tiresome as it is offensive; and those among you blissfully unaware of the mental morass that is Truth Social. Whatever camp you fall into, the video crosses all demographics by being expected -- just another crazy Trump stunt in a repetitive cycle of division and diversion so frequent it makes Groundhog Day seem fresh. But there are three reasons why this particular video -- not made by the president but amplified to thousands -- is worth noting, and maybe even worth fearing. First, it is flat-out racist. In it, Obama is ripped out of a chair in the Oval Office and forced onto his knees, almost bowing, to a laughing Trump.


Indiana senator calls on WNBA, Fever to apologize to fans after accusations of racism: 'So demeaning'

FOX News

Republican Sen. Jim Banks explains why Indiana Fever fans deserve an apology after the league's latest investigation during an appearance on OutKick's 'Don't @ Me with Dan Dakich.' U.S. Sen. Jim Banks, R-Ind., called on the WNBA and the Indiana Fever to apologize to Fever fans after the league's investigation failed to find evidence corroborating allegations of racial comments directed at Angel Reese during a recent game. The league investigated the allegations involving the Chicago Sky star last month after a May 17 game hosted by the Fever. Chicago Sky forward Angel Reese (5) reacts to a flagrant foul from Indiana Fever guard Caitlin Clark (22) May 17, 2025, at Gainbridge Fieldhouse in Indianapolis. "Based on information gathered to date, including from relevant fans, team and arena staff, as well as audio and video review of the game, we have not substantiated [the report]," the league said in a statement.


Measuring Online Hate on 4chan using Pre-trained Deep Learning Models

Bermudez-Villalva, Adrian, Mehrnezhad, Maryam, Toreini, Ehsan

arXiv.org Artificial Intelligence

Online hate speech can harmfully impact individuals and groups, specifically on non-moderated platforms such as 4chan, where users can post anonymous content. This work focuses on analysing and measuring the prevalence of online hate on 4chan's politically incorrect board (/pol/) using state-of-the-art Natural Language Processing (NLP) models, specifically transformer-based models such as RoBERTa and Detoxify. By leveraging these advanced models, we provide an in-depth analysis of hate speech dynamics and quantify the extent of online hate on non-moderated platforms. The study advances understanding through multi-class classification of hate speech (racism, sexism, religion, etc.), while also incorporating the classification of toxic content (e.g., identity attacks and threats) and a further topic modelling analysis. The results show that 11.20% of this dataset is identified as containing hate in different categories. These evaluations show that online hate is manifested in various forms, confirming the complicated and volatile nature of detection in the wild. Index Terms: Hate speech, machine learning, natural language processing (NLP), online hate, toxicity analysis. The spread of hate speech on online platforms has become a serious problem in our society. As digital communication becomes ubiquitous, platforms like 4chan, known for their anonymity and minimal moderation, have become hotspots for this harmful behaviour. This is particularly evident on its politically incorrect board, /pol/, a notorious board dedicated to discussing politics and current events, often associated with hate speech, extremist content, and conspiracy theories [1]. The anonymity provided by these platforms often encourages users to express extreme ideologies [2]. This issue raises significant concerns about the impact on at-risk and vulnerable groups, as it can cause real-world harm, including psychological trauma. Therefore, a systematic approach is needed to measure and understand the prevalence and forms of online hate. Received 28 August 2024; revised 23 December 2024, 10 February 2025, and 6 March 2025; accepted 6 March 2025. This work is supported by UK Research and Innovation (UKRI) through the Strategic Priority Fund as part of the Protecting Citizens Online programme (AGENCY: Assuring Citizen Agency in a World with Complex Online Harms, EP/W032481/2).
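The prevalence measurement described above (the 11.20% figure) can be sketched as follows. The study scores posts with RoBERTa- and Detoxify-style models; the per-category scores, threshold, and category names below are hypothetical placeholders standing in for real model output.

```python
# Count a post as hateful if any hate category crosses a threshold,
# then report the share of hateful posts as a percentage.
HATE_CATEGORIES = ("racism", "sexism", "religion")
THRESHOLD = 0.5  # hypothetical decision threshold

def is_hateful(scores):
    """scores: mapping of category name -> model confidence in [0, 1]."""
    return any(scores.get(c, 0.0) >= THRESHOLD for c in HATE_CATEGORIES)

# Hypothetical per-post scores from a toxicity classifier:
posts = [
    {"racism": 0.91, "sexism": 0.03},
    {"racism": 0.10, "sexism": 0.08},
    {"religion": 0.72},
    {"racism": 0.02},
]

prevalence = 100 * sum(is_hateful(p) for p in posts) / len(posts)
print(f"{prevalence:.2f}% of posts flagged as hateful")  # 50.00%
```

In the real pipeline each score dict would come from a transformer model's prediction over a post, and the threshold choice directly shifts the reported prevalence.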


Exploring Large Language Models for Hate Speech Detection in Rioplatense Spanish

Pérez, Juan Manuel, Miguel, Paula, Cotik, Viviana

arXiv.org Artificial Intelligence

Hate speech detection deals with many language variants, slang, slurs, expression modalities, and cultural nuances. This underscores the importance of working with specific corpora when addressing hate speech within the scope of Natural Language Processing, a field recently revolutionized by the irruption of Large Language Models. This work presents a brief analysis of the performance of large language models in the detection of hate speech for Rioplatense Spanish. We performed classification experiments leveraging chain-of-thought reasoning with ChatGPT 3.5, Mixtral, and Aya, comparing their results with those of a state-of-the-art BERT classifier. These experiments show that, even if large language models have lower precision than the fine-tuned BERT classifier and, in some cases, struggle with hard-to-detect slurs or colloquialisms, they are still sensitive to highly nuanced cases (particularly, homophobic/transphobic hate speech). We make our code and models publicly available for future research.
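The chain-of-thought classification setup described above can be sketched as prompt construction plus verdict parsing. The prompt wording, label format, and canned model reply below are hypothetical; the paper's actual prompts (for ChatGPT 3.5, Mixtral, and Aya) may differ.

```python
def build_prompt(text):
    """Ask the model to reason step by step before emitting a label."""
    return (
        "You are a hate-speech annotator for Rioplatense Spanish.\n"
        "Think step by step about slang, slurs, and context, then end\n"
        "with a single line 'LABEL: HATEFUL' or 'LABEL: NOT_HATEFUL'.\n\n"
        f"Text: {text}"
    )

def parse_label(reply):
    """Scan the reply from the end for the final verdict line."""
    for line in reversed(reply.strip().splitlines()):
        if line.startswith("LABEL:"):
            return line.split(":", 1)[1].strip()
    return "UNPARSED"

# Hypothetical model reply, illustrating the reasoning-then-verdict shape:
reply = (
    "The phrase uses a local slur targeting a protected group...\n"
    "LABEL: HATEFUL"
)
print(parse_label(reply))  # HATEFUL
```

Parsing from the end matters with chain-of-thought outputs: the reasoning text may itself mention labels before the final verdict.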


'There's a gay bar in my pocket!': how 15 years of Grindr has affected gay communities and dating culture

The Guardian

One of pop culture's earliest and most seminal depictions of gay online dating comes from a 1999 episode of Sex and the City. Stanford Blatch, Carrie Bradshaw's gay friend, played by the late Willie Garson, is seeking advice. He's been chatting to another man in an online chatroom – the height of technology at the time – and wonders whether they should meet up. "What do you know about him?" asks Bradshaw. "Well, his name is bigtool4u," answers Blatch – cue hysterics from Bradshaw.


Detecting Racist Text in Bengali: An Ensemble Deep Learning Framework

Saruar, S. S., Nusrat, Sadia

arXiv.org Artificial Intelligence

Racism is an alarming phenomenon in our country as well as all over the world. Every day we come across racist comments in our daily life and virtual life. We can, however, work to eradicate this racism from virtual life (such as social media). In this paper, we detect such racist comments with NLP and deep learning techniques. We built a novel dataset in the Bengali language, annotated it, and conducted data label validation. After extensive use of deep learning methodologies, we achieved racist-text detection with an accuracy of 87.94% using an ensemble approach. We applied RNN and LSTM models using BERT embeddings; among these, the MCNN-LSTM model performed best. Lastly, an ensemble approach was used to combine all model results to increase overall performance.
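The final ensemble step described above, combining the outputs of several models, can be sketched as soft voting: average each model's class probabilities and take the argmax. The three probability vectors below are hypothetical stand-ins for the RNN, LSTM, and MCNN-LSTM outputs; the paper does not specify its exact combination rule, so this is one plausible reading, not the authors' method.

```python
def soft_vote(prob_lists):
    """Average per-class probabilities across models, return
    (predicted class index, averaged probabilities)."""
    n_classes = len(prob_lists[0])
    avg = [
        sum(p[i] for p in prob_lists) / len(prob_lists)
        for i in range(n_classes)
    ]
    return max(range(n_classes), key=lambda i: avg[i]), avg

# Class 0 = not racist, class 1 = racist (hypothetical outputs):
rnn_probs  = [0.40, 0.60]
lstm_probs = [0.55, 0.45]
mcnn_probs = [0.20, 0.80]

label, avg = soft_vote([rnn_probs, lstm_probs, mcnn_probs])
print(label)  # 1: the averaged "racist" score is higher
```

Soft voting can flip an individual model's decision (here the LSTM alone would have predicted class 0), which is how an ensemble can outperform its best single member.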