Assessing Historical Structural Oppression Worldwide via Rule-Guided Prompting of Large Language Models
Chatterjee, Sreejato, Tran, Linh, Nguyen, Quoc Duy, Kirson, Roni, Hamlin, Drue, Aquino, Harvest, Lyu, Hanjia, Luo, Jiebo, Dye, Timothy
Abstract--Traditional efforts to measure historical structural oppression struggle with cross-national validity due to the unique, locally specified histories of exclusion, colonization, and social status in each country, and have often relied on structured indices that privilege material resources while overlooking lived, identity-based exclusion. We introduce a novel framework for oppression measurement that leverages Large Language Models (LLMs) to generate context-sensitive scores of lived historical disadvantage across diverse geopolitical settings. Using unstructured self-identified ethnicity utterances from a multilingual COVID-19 global study, we design rule-guided prompting strategies that encourage models to produce interpretable, theoretically grounded estimates of oppression. We systematically evaluate these strategies across multiple state-of-the-art LLMs. Our results demonstrate that LLMs, when guided by explicit rules, can capture nuanced forms of identity-based historical oppression within nations. This approach provides a complementary measurement tool that highlights dimensions of systemic exclusion, offering a scalable, cross-cultural lens for understanding how oppression manifests in data-driven research and public health contexts.

The study of racial and ethnic inequality remains central to sociological research, with an extensive literature documenting how structural oppression is reproduced in historical and contemporary contexts [1]-[3]. Oppression can be understood as a social hierarchy in which some groups subject other groups to lower status and to systemic exclusion, dehumanization, and disadvantage. In public health and sociology, this oppression is closely aligned with definitions of systemic and structural racism, which describe racism as deeply embedded in laws, policies, institutional practices, and social norms that sustain widespread inequities, violence, and disadvantage over time [1].
Foundational works have demonstrated how ethnic and national hierarchies shape access to power, life opportunities, autonomy, and sovereignty, for example, primarily through institutionalized mechanisms such as legal structures, educational systems, and healthcare access, among others [2].
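The rule-guided prompting strategy described in the abstract can be illustrated with a minimal sketch. The rule wording, the 0-1 scale, and the function name below are hypothetical placeholders for illustration, not the study's actual prompts:

```python
def build_oppression_prompt(utterance: str, country: str) -> str:
    """Assemble a rule-guided prompt for scoring historical oppression.

    The rules and the 0-1 scale here are illustrative assumptions,
    not the paper's actual prompt text.
    """
    rules = [
        "Score only historical, structural disadvantage, not individual experience.",
        "Ground the score in documented exclusion (legal, educational, healthcare).",
        "Return a number between 0 (least oppressed) and 1 (most oppressed).",
        "Briefly justify the score before stating it.",
    ]
    rule_block = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(rules))
    return (
        f"Country: {country}\n"
        f"Self-identified ethnicity: {utterance}\n\n"
        f"Rules:\n{rule_block}\n\n"
        "Estimate this group's historical structural oppression score."
    )

# Hypothetical example utterance from a self-identification field.
prompt = build_oppression_prompt("Quilombola", "Brazil")
```

The point of the explicit numbered rules is to constrain the model toward interpretable, theory-grounded scores rather than free-form judgments; the assembled string would then be sent to each LLM under evaluation.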
- South America > Brazil (0.05)
- North America > United States > New York > Monroe County > Rochester (0.05)
- Africa > Middle East > Algeria (0.05)
Liberals are catalysts to catastrophe, again
Yoav Litvin is an Israeli-American doctor of psychology/neuroscience, a writer and photographer.

On September 17, the late-night talk show host Jimmy Kimmel was suspended after remarks he made about the death of right-wing activist Charlie Kirk. Days later, he was reinstated following liberal upheaval. In his first appearance back on air, Kimmel read US President Donald Trump's post on Truth Social: "I can't believe ABC fake news gave Jimmy Kimmel his job back." Without missing a beat, Kimmel responded, "You can't believe they gave me my job back. I can't believe we gave you your job back!"
- North America > United States (1.00)
- Asia > Middle East > Palestine > Gaza Strip > Gaza Governorate > Gaza (0.06)
- Asia > Middle East > Israel (0.06)
- Media (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Law > Civil Rights & Constitutional Law (0.98)
Identities are not Interchangeable: The Problem of Overgeneralization in Fair Machine Learning
A key value proposition of machine learning is generalizability: the same methods and model architecture should be able to work across different domains and different contexts. While powerful, this generalization can sometimes go too far, and miss the importance of the specifics. In this work, we look at how fair machine learning has often treated as interchangeable the identity axis along which discrimination occurs. In other words, racism is measured and mitigated the same way as sexism, as ableism, as ageism. Disciplines outside of computer science have pointed out both the similarities and differences between these different forms of oppression, and in this work we draw out the implications for fair machine learning. While certainly not all aspects of fair machine learning need to be tailored to the specific form of oppression, there is a pressing need for greater attention to such specificity than is currently evident. Ultimately, context specificity can deepen our understanding of how to build more fair systems, widen our scope to include currently overlooked harms, and, almost paradoxically, also help to narrow our scope and counter the fear of an infinite number of group-specific methods of analysis.
- Europe > Greece > Attica > Athens (0.05)
- North America > United States > Alaska (0.04)
- Asia > India (0.04)
- Law > Labor & Employment Law (1.00)
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology (0.93)
Is AI the New Frontier of Women's Oppression?
In her new book, feminist author Laura Bates explores how sexbots, AI assistants, and deepfakes are reinventing misogyny and harming women. After spending her early twenties as a nanny in the UK, Bates noticed that the young girls she was caring for were preoccupied with their bodies, spurred on by the marketing they were receiving. In 2012, Bates, a London-based feminist author and activist, started The Everyday Sexism Project, a website dedicated to documenting and combating sexism, misogyny, and gendered violence around the world by highlighting insidious instances of it, such as invisible labor, referring to women as "girls," and commenting on their attire in professional settings. The site was turned into a book in 2014.
- Europe > United Kingdom (0.35)
- Oceania > Australia (0.05)
- South America (0.04)
- Law > Criminal Law (0.67)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.67)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.47)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.70)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (0.50)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.48)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.30)
A Feminist Account of Intersectional Algorithmic Fairness
Mirsch, Marie, Wegner, Laila, Strube, Jonas, Leicht-Scholten, Carmen
Intersectionality has profoundly influenced research and political action by revealing how interconnected systems of privilege and oppression influence lived experiences, yet its integration into algorithmic fairness research remains limited. Existing approaches often rely on single-axis or formal subgroup frameworks that risk oversimplifying social realities and neglecting structural inequalities. We propose Substantive Intersectional Algorithmic Fairness, extending Green's (2022) notion of substantive algorithmic fairness with insights from intersectional feminist theory. Building on this foundation, we introduce ten desiderata within the ROOF methodology to guide the design, assessment, and deployment of algorithmic systems in ways that address systemic inequities while mitigating harms to intersectionally marginalized communities. Rather than prescribing fixed operationalizations, these desiderata encourage reflection on assumptions of neutrality, the use of protected attributes, the inclusion of multiply marginalized groups, and enhancing algorithmic systems' potential. Our approach emphasizes that fairness cannot be separated from social context, and that in some cases, principled non-deployment may be necessary. By bridging computational and social science perspectives, we provide actionable guidance for more equitable, inclusive, and context-sensitive intersectional algorithmic practices.
- North America > United States > New York > New York County > New York City (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Law > Civil Rights & Constitutional Law (1.00)
- Education (0.67)
- Government > Regional Government > North America Government > United States Government (0.46)
- Information Technology > Data Science (0.93)
- Information Technology > Artificial Intelligence > Natural Language (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.67)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.46)
Exploiting the Margin: How Capitalism Fuels AI at the Expense of Minoritized Groups
This paper explores the intricate relationship between capitalism, racial injustice, and artificial intelligence (AI), arguing that AI acts as a contemporary vehicle for age-old forms of exploitation. By linking historical patterns of racial and economic oppression with current AI practices, this study illustrates how modern technology perpetuates and deepens societal inequalities. It specifically examines how AI is implicated in the exploitation of marginalized communities through underpaid labor in the gig economy, the perpetuation of biases in algorithmic decision-making, and the reinforcement of systemic barriers that prevent these groups from benefiting equitably from technological advances. Furthermore, the paper discusses the role of AI in extending and intensifying the social, economic, and psychological burdens faced by these communities, highlighting the problematic use of AI in surveillance, law enforcement, and mental health contexts. The analysis concludes with a call for transformative changes in how AI is developed and deployed. Advocating for a reevaluation of the values driving AI innovation, the paper promotes an approach that integrates social justice and equity into the core of technological design and policy. This shift is crucial for ensuring that AI serves as a tool for societal improvement, fostering empowerment and healing rather than deepening existing divides.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > New York (0.05)
- Europe > United Kingdom > England > West Sussex (0.04)
- Government (1.00)
- Law > Civil Rights & Constitutional Law (0.90)
- Banking & Finance > Economy (0.68)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.34)
Do Not Fear the Robot Uprising. Join It
Our society has interpreted the sudden, dizzying rise of this new chatbot generation through the pop-cultural lens of our youth. With it comes the sense that the straightforward "robots will kill us all" stories were prescient (or at least accurately captured the current vibe), and that there was a staggering naivete in the more forgiving "AI civil rights" narratives--famously epitomized by Star Trek's Commander Data, an android who fought to be treated the same as his organic Starfleet colleagues. Patrick Stewart's Captain Picard, defending Data in a trial to prove his sapience, thundered, "Your honor, Starfleet was founded to seek out new life. Well, there it sits!" But far from being a relic of a bygone, more optimistic age, the AI civil rights narrative is more relevant than ever. It just needs to be understood in its proper context.
PACO: Provocation Involving Action, Culture, and Oppression
Garg, Vaibhav, Xu, Ganning, Singh, Munindar P.
In India, people identify with a particular group based on certain attributes such as religion, and religious groups are often provoked against each other. Previous studies show the role of provocation in increasing tensions between India's two prominent religious groups: Hindus and Muslims. With the advent of the Internet, such provocation has also surfaced on social media platforms such as WhatsApp. Leveraging an existing dataset of Indian WhatsApp posts, we identified three categories of provoking sentences against Indian Muslims. We then labeled 7,000 sentences for these three provocation categories and called the resulting dataset PACO. We used PACO to train a model that identifies provoking sentences in a WhatsApp post. Our best model, a fine-tuned RoBERTa, achieved a 0.851 average AUC score over five-fold cross-validation. Automatically identifying provoking sentences could stop provoking text from reaching the masses and could prevent possible discrimination or violence against the targeted religious group. Further, we studied the provocative speech through a pragmatic lens, identifying the dialog acts and impoliteness super-strategies used against the religious group.
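PACO reports its headline result as an average AUC over five-fold cross-validation. As a self-contained illustration of that metric (not the authors' RoBERTa pipeline), AUC can be computed in pure Python as the Wilcoxon-Mann-Whitney pairwise-ranking statistic, here averaged over hypothetical folds with made-up scores:

```python
def pairwise_auc(labels, scores):
    """AUC as the fraction of (positive, negative) pairs ranked correctly.

    Ties count as half a win; this is equivalent to the
    Wilcoxon-Mann-Whitney statistic.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative label")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-fold (labels, scores) pairs; the mean mirrors the
# paper's five-fold averaging, with invented numbers for illustration.
folds = [
    ([1, 1, 0, 0], [0.9, 0.7, 0.4, 0.2]),
    ([1, 0, 1, 0], [0.8, 0.3, 0.6, 0.5]),
]
mean_auc = sum(pairwise_auc(y, s) for y, s in folds) / len(folds)
```

In the actual study, the scores would be the fine-tuned classifier's per-sentence probabilities on each held-out fold; averaging across folds gives a single cross-validated figure comparable to the reported 0.851.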
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Information Technology > Services (0.76)
- Law Enforcement & Public Safety (0.68)
Inside Safe City, Moscow's AI Surveillance Dystopia
Sergey Vyborov was on his way to the Moscow Metro's Aeroport station last September when police officers stopped him. The 49-year-old knew that taking the metro could spell trouble. During a protest against Russia's invasion of Ukraine, police had fingerprinted and photographed him. He'd already been detained four times in 2022. But he was rushing to his daughter's birthday, so he took a chance.
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.67)
- Asia > Russia (0.44)
- Europe > Ukraine (0.30)
A Holistic Framework for Analyzing the COVID-19 Vaccine Debate
Pacheco, Maria Leonor, Islam, Tunazzina, Mahajan, Monal, Shor, Andrey, Yin, Ming, Ungar, Lyle, Goldwasser, Dan
The Covid-19 pandemic has led to an infodemic of low-quality information, leading to poor health decisions. Combating the outcomes of this infodemic is not only a question of identifying false claims, but also of reasoning about the decisions individuals make. In this work we propose a holistic analysis framework connecting stance and reason analysis with fine-grained, entity-level moral sentiment analysis. We study how to model the dependencies between the different levels of analysis and incorporate human insights into the learning process. Experiments show that our framework provides reliable predictions even in low-supervision settings.
- Europe > Poland (0.04)
- Oceania > Australia (0.04)
- North America > United States > Virginia (0.04)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Discourse & Dialogue (0.66)
- Information Technology > Artificial Intelligence > Natural Language > Information Extraction (0.48)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.46)