dehumanization
Beyond the Explicit: A Bilingual Dataset for Dehumanization Detection in Social Media
Assenmacher, Dennis, Piot, Paloma, Laken, Katarina, Jurgens, David, Wagner, Claudia
Digital dehumanization, although a critical issue, remains largely overlooked within the field of computational linguistics and Natural Language Processing. The prevailing approach in current research concentrates primarily on a single aspect of dehumanization, identifying overtly negative statements as its core marker. This focus, while crucial for understanding harmful online communications, inadequately addresses the broader spectrum of dehumanization. Specifically, it overlooks the subtler forms of dehumanization that, despite not being overtly offensive, still perpetuate harmful biases against marginalized groups in online interactions. These subtler forms can insidiously reinforce negative stereotypes and biases without explicit offensiveness, making them harder to detect yet equally damaging. Recognizing this gap, we use different sampling methods to collect a theory-informed bilingual dataset from Twitter and Reddit. Using crowdworkers and experts to annotate 16,000 instances at the document and span level, we show that our dataset covers the different dimensions of dehumanization. This dataset serves as both a training resource for machine learning models and a benchmark for evaluating future dehumanization detection techniques. To demonstrate its effectiveness, we fine-tune ML models on this dataset, achieving performance that surpasses state-of-the-art models in zero- and few-shot in-context settings.
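The abstract describes document-level annotation and fine-tuning that outperforms in-context baselines. As a minimal sketch of what such fine-tuning could look like, assuming a multilingual base model and hypothetical column names (the dataset's actual schema, splits, and base model are not specified here):

```python
# Minimal sketch of document-level fine-tuning on a dehumanization dataset.
# The examples, column names, and label set are hypothetical; the paper's
# actual data and base model may differ.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL = "xlm-roberta-base"  # a common bilingual-capable baseline (assumption)
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# Hypothetical examples: 1 = dehumanizing, 0 = not dehumanizing.
data = Dataset.from_dict({
    "text": ["example post one", "example post two"],
    "label": [0, 1],
})
data = data.map(lambda ex: tok(ex["text"], truncation=True,
                               padding="max_length", max_length=128),
                batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dehum-clf", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=data,
)
trainer.train()
```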
- Europe > Austria > Vienna (0.14)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > Michigan (0.04)
- (8 more...)
- Law Enforcement & Public Safety (0.68)
- Government > Regional Government (0.68)
- Health & Medicine > Therapeutic Area (0.67)
- Government > Immigration & Customs (0.46)
Hateful Meme Detection through Context-Sensitive Prompting and Fine-Grained Labeling
Ouyang, Rongxin, Jaidka, Kokil, Mukerjee, Subhayan, Cui, Guangyu
The prevalence of multi-modal content on social media complicates automated moderation strategies. This calls for an enhancement in multi-modal classification and a deeper understanding of understated meanings in images and memes. Although previous efforts have aimed at improving model performance through fine-tuning, few have explored an end-to-end optimization pipeline that accounts for modalities, prompting, labeling, and fine-tuning. In this study, we propose an end-to-end conceptual framework for model optimization in complex tasks. Experiments support the efficacy of this traditional yet novel framework, achieving the highest accuracy and AUROC. Ablation experiments demonstrate that isolated optimizations are not effective on their own.
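The pipeline combines prompting, fine-grained labeling, and fine-tuning. A minimal sketch of the prompting and labeling stages, assuming a captioning model and OCR supply the image-side signals (the label taxonomy and context fields below are illustrative assumptions, not the authors' exact scheme):

```python
# A sketch of context-sensitive prompt construction with fine-grained labels.
# Labels and context fields are illustrative, not the paper's actual scheme.
FINE_GRAINED_LABELS = ["not hateful", "hateful: dehumanizing",
                       "hateful: mocking", "hateful: inciting"]

def build_prompt(caption: str, ocr_text: str, post_context: str) -> str:
    """Combine the meme's modalities into one instruction for a VLM."""
    labels = "; ".join(FINE_GRAINED_LABELS)
    return (
        "You are a content moderator. A meme consists of an image and text.\n"
        f"Image caption (from a captioning model): {caption}\n"
        f"Text overlaid on the image (OCR): {ocr_text}\n"
        f"Surrounding post text: {post_context}\n"
        f"Classify the meme into exactly one of: {labels}.\n"
        "Answer with the label only."
    )

print(build_prompt("a crowded train platform", "THEY keep coming",
                   "shared in an anti-immigration group"))
```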
- Law > Civil Rights & Constitutional Law (0.69)
- Law Enforcement & Public Safety > Terrorism (0.47)
Predicting Femicide in Veracruz: A Fuzzy Logic Approach with the Expanded MFM-FEM-VER-CP-2024 Model
Medel-Ramírez, Carlos, Medel-López, Hilario
The article focuses on the urgent issue of femicide in Veracruz, Mexico, and the development of the MFM-FEM-VER-CP-2024 model, a mathematical framework designed to predict femicide risk using fuzzy logic. This model addresses the complexity and uncertainty inherent in gender-based violence by formalizing risk factors such as coercive control, dehumanization, and the cycle of violence. These factors are mathematically modeled through membership functions that assess the degree of risk associated with various conditions, including personal relationships and specific acts of violence. The study enhances the original model by incorporating new rules and refining existing membership functions, which significantly improves the model's predictive accuracy.
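To make the fuzzy-logic mechanics concrete: membership functions map an observed condition to a degree in [0, 1], and rules combine those degrees (conventionally min for AND, max for aggregating rules). The sketch below is a toy illustration under those standard Mamdani conventions; the variables, breakpoints, and the single rule are assumptions, not the MFM-FEM-VER-CP-2024 model's actual parameters:

```python
# Toy Mamdani-style sketch: membership functions grade observed conditions,
# and a rule combines them into a degree of risk. All parameters below are
# illustrative assumptions, not the MFM-FEM-VER-CP-2024 model's values.

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership: 0 at a and c, 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical inputs on 0-10 scales.
coercive_control = 7.5
dehumanization = 6.0

mu_control_high = triangular(coercive_control, 4, 8, 10)  # 0.875
mu_dehum_high = triangular(dehumanization, 4, 8, 10)      # 0.5

# Rule: IF coercive control is high AND dehumanization is high
# THEN risk is high. AND is modeled as min; multiple rules for the
# same output would combine via max.
risk_high = min(mu_control_high, mu_dehum_high)
print(f"degree of 'high risk': {risk_high:.2f}")  # 0.50
```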
- North America > Mexico > Veracruz (0.61)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > Canada (0.04)
- Law > Criminal Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.49)
Beyond Hate Speech: NLP's Challenges and Opportunities in Uncovering Dehumanizing Language
Zhang, Hezhao, Harris, Lasana, Moosavi, Nafise Sadat
Dehumanization, characterized as a subtle yet harmful manifestation of hate speech, involves denying individuals their human qualities and often results in violence against marginalized groups. Despite significant progress in Natural Language Processing across various domains, its application in detecting dehumanizing language is limited, largely due to the scarcity of publicly available annotated data for this domain. This paper evaluates the performance of cutting-edge NLP models, including GPT-4, GPT-3.5, and LLAMA-2, in identifying dehumanizing language. Our findings reveal that while these models demonstrate potential, achieving a 70% accuracy rate in distinguishing dehumanizing language from broader hate speech, they also display biases: they are over-sensitive, classifying other forms of hate speech as dehumanization for a specific subset of target groups, while more frequently failing to identify clear cases of dehumanization for other target groups. Moreover, leveraging one of the best-performing models, we automatically annotated a larger dataset for training more accessible models. However, our findings indicate that these models currently do not meet the high-quality data generation threshold necessary for this task.
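A minimal sketch of what such a zero-shot evaluation could look like, assuming the OpenAI chat API and a binary yes/no prompt (the prompt wording, model choice, and examples are assumptions; the paper's exact protocol may differ):

```python
# Sketch of zero-shot evaluation: prompt an LLM to separate dehumanizing
# language from other hate speech, then score accuracy on labeled examples.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = ("Does the following text dehumanize a group (e.g., through animal "
          "metaphors, objectification, or denial of agency), as opposed to "
          "other hate speech or neutral text? Answer 'yes' or 'no'.\n\nText: {}")

def is_dehumanizing(text: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT.format(text)}],
    )
    return resp.choices[0].message.content.strip().lower().startswith("yes")

# Hypothetical labeled examples (True = dehumanizing).
examples = [("they are vermin flooding our streets", True),
            ("I disagree with this policy", False)]
accuracy = sum(is_dehumanizing(t) == y for t, y in examples) / len(examples)
print(f"accuracy: {accuracy:.2f}")
```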
- Europe > United Kingdom (0.46)
- Asia > China (0.28)
- North America > United States (0.04)
- (10 more...)
A Dataset for the Detection of Dehumanizing Language
Engelmann, Paul, Trolle, Peter Brunsgaard, Hardmeier, Christian
Dehumanization can range from blatant to subtle forms of varying degrees (Bain et al., 2009), making automated, general detection difficult. Following Haslam (2006), a sample is considered dehumanizing if it contains at least one of the following categories: negative evaluation of a target group, denial of agency, moral disgust, animal metaphors, or objectification. Animal metaphors and objectification specifically relate to a human being compared to an animal or object with the intent to cause harm. Mendelsohn et al. (2020) present one of the first computational works on dehumanization through explicit feature engineering, using lexicon- and word-embedding-based approaches to detect dehumanizing associations across several years in a New York Times corpus. Outside of this, there is little computational work on dehumanization. Trigger Warning: This paper contains examples of hateful content that some may find disturbing.
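The category scheme translates naturally into an annotation data structure: a sample is flagged as dehumanizing if at least one category applies. A minimal sketch, with field names that are illustrative rather than the dataset's actual schema:

```python
# Sketch of the category scheme as a data structure. Field names are
# illustrative assumptions, not the dataset's published schema.
from dataclasses import dataclass, field
from enum import Enum

class Category(Enum):
    NEGATIVE_EVALUATION = "negative evaluation of a target group"
    DENIAL_OF_AGENCY = "denial of agency"
    MORAL_DISGUST = "moral disgust"
    ANIMAL_METAPHOR = "animal metaphor"
    OBJECTIFICATION = "objectification"

@dataclass
class Sample:
    text: str
    categories: set = field(default_factory=set)

    @property
    def is_dehumanizing(self) -> bool:
        # Any single category suffices to flag the sample.
        return len(self.categories) > 0

s = Sample("they breed like rats", {Category.ANIMAL_METAPHOR})
print(s.is_dehumanizing)  # True
```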
- Europe > Denmark > Capital Region > Copenhagen (0.05)
- North America > United States > Illinois (0.04)
- North America > United States > California (0.04)
On Measures of Biases and Harms in NLP
Dev, Sunipa, Sheng, Emily, Zhao, Jieyu, Amstutz, Aubrie, Sun, Jiao, Hou, Yu, Sanseverino, Mattie, Kim, Jiin, Nishi, Akihiro, Peng, Nanyun, Chang, Kai-Wei
Recent studies show that Natural Language Processing (NLP) technologies propagate societal biases about demographic groups associated with attributes such as gender, race, and nationality. To create interventions and mitigate these biases and associated harms, it is vital to be able to detect and measure such biases. While existing works propose bias evaluation and mitigation methods for various tasks, there remains a need to cohesively understand the biases and the specific harms they measure, and how different measures compare with each other. To address this gap, this work presents a practical framework of harms and a series of questions that practitioners can answer to guide the development of bias measures. As a validation of our framework and documentation questions, we also present several case studies of how existing bias measures in NLP -- both intrinsic measures of bias in representations and extrinsic measures of bias of downstream applications -- can be aligned with different harms and how our proposed documentation questions facilitate a more holistic understanding of what bias measures are measuring.
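As one concrete instance of an intrinsic measure such a framework would document, a WEAT-style association score compares how close a target word sits to two attribute word sets in embedding space. A minimal sketch with toy random vectors (a real test would use trained embeddings and the full WEAT statistic with permutation testing):

```python
# Sketch of a WEAT-style association score: mean cosine similarity of a
# target word to attribute set A minus its similarity to set B. The random
# vectors are placeholders standing in for trained embeddings.
import numpy as np

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in
       ["doctor", "nurse", "he", "she", "him", "her"]}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, attrs_a, attrs_b):
    """Mean cosine similarity to attribute set A minus set B."""
    sa = np.mean([cos(emb[word], emb[a]) for a in attrs_a])
    sb = np.mean([cos(emb[word], emb[b]) for b in attrs_b])
    return sa - sb

# Positive => 'doctor' sits closer to male attribute words in this space.
print(association("doctor", ["he", "him"], ["she", "her"]))
```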
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- (7 more...)
- Research Report (1.00)
- Overview (0.93)
How to Safeguard Humanity in a Context of Excessive Automation? - MedicalExpo e-Magazine
Jean-Michel Besnier is a French philosopher who teaches at Sorbonne University in Paris. His research focuses on the philosophical and ethical impact of science and technology on individual and collective representations and imagination. We met with him to talk about the consequences of the explosion of robotics and artificial intelligence (AI) in the healthcare sector, especially since the beginning of the Covid-19 pandemic. MedicalExpo e-magazine: Can you give us your definition of artificial intelligence? Jean-Michel Besnier: I have the same definition as everyone else. I am more attentive to the conceptual extension of the notion of artificial intelligence, which at the beginning referred to something rather simple: the implementation of devices capable of solving problems in an automatic or algorithmic way.
Why is AI mostly presented as female in pop culture and demos?
With the proliferation of female robots such as Sophia and the popularity of female virtual assistants such as Siri (Apple), Alexa (Amazon), and Cortana (Microsoft), artificial intelligence seems to have a gender issue. This gender imbalance in AI is a pervasive trend that has drawn sharp criticism in the media (even UNESCO has warned against the dangers of this practice) because it could reinforce stereotypes that women are objects. But why is femininity injected into artificially intelligent objects? If we want to curb the massive use of female gendering in AI, we need to better understand the deep roots of this phenomenon. In an article published in the journal Psychology & Marketing, we argue that research on what makes people human can provide a new perspective on why feminization is systematically used in AI.
How The Sensay Chatbot Is Providing Actual Human Connection
Chatbots have not seen the success in non-spamming applications that we had once hoped for. Microsoft's Tay experiment was a massive failure thanks to human interference. Microsoft is trying again with Zo, focusing on a future where chatbots are your best friend: fluid interaction with a programmable, learning artificial intelligence that operates without human assistance. Conversely, in the case of Sensay, the future of bots is one that is just as human as a bot could possibly be.