
Misogyny


Tech firms must remove 'revenge porn' in 48 hours or risk being blocked, says Starmer

The Guardian

The prime minister, Keir Starmer, said the 'burden of tackling abuse must no longer fall on victims' in an article written for the Guardian. Deepfake nudes and 'revenge porn' must be removed from the internet within 48 hours or technology firms risk being blocked in the UK, Starmer has said, calling online misogyny a 'national emergency' that the government must confront. Companies could be fined millions or even blocked altogether if they allow the images to spread or be reposted after victims give notice. Amendments will be made to the crime and policing bill to also regulate AI chatbots such as X's Grok, which generated nonconsensual images of women in bikinis or in compromising positions until the government threatened action against Elon Musk's company.


Is AI the New Frontier of Women's Oppression?

WIRED

In her new book, feminist author Laura Bates explores how sexbots, AI assistants, and deepfakes are reinventing misogyny and harming women. After spending her early twenties as a nanny in the UK, Bates noticed that the young girls she was caring for were preoccupied with their bodies, spurred on by the marketing aimed at them. In 2012, Bates, a London-based feminist author and activist, started The Everyday Sexism Project, a website dedicated to documenting and combating sexism, misogyny, and gendered violence around the world by highlighting insidious instances of it, such as invisible labor, referring to women as girls, and commenting on their attire in professional settings. The site was turned into a book in 2014.


Co-AttenDWG: Co-Attentive Dimension-Wise Gating and Expert Fusion for Multi-Modal Offensive Content Detection

Hossain, Md. Mithun, Hossain, Md. Shakil, Chaki, Sudipto, Mridha, M. F.

arXiv.org Artificial Intelligence

Multi-modal learning has emerged as a crucial research direction, as integrating textual and visual information can substantially enhance performance in tasks such as classification, retrieval, and scene understanding. Despite advances with large pre-trained models, existing approaches often suffer from insufficient cross-modal interactions and rigid fusion strategies, failing to fully harness the complementary strengths of different modalities. To address these limitations, we propose Co-AttenDWG, an architecture combining co-attention, dimension-wise gating, and expert fusion. Our approach first projects textual and visual features into a shared embedding space, where a dedicated co-attention mechanism enables simultaneous, fine-grained interactions between modalities. This is further strengthened by a dimension-wise gating network, which adaptively modulates feature contributions at the channel level to emphasize salient information. In parallel, dual-path encoders independently refine modality-specific representations, while an additional cross-attention layer aligns the modalities further. The resulting features are aggregated via an expert fusion module that integrates learned gating and self-attention, yielding a robust unified representation. Experimental results on the MIMIC and SemEval Memotion 1.0 datasets show that Co-AttenDWG achieves state-of-the-art performance and superior cross-modal alignment, highlighting its effectiveness for diverse multi-modal applications.
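The dimension-wise gating idea in the abstract can be sketched in a few lines: a learned gate produces one value per channel and blends the two modalities channel by channel. This is a minimal NumPy illustration of that general mechanism, not the paper's actual implementation; the weights, dimensions, and sigmoid parameterization here are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dimension_wise_gate(text_feat, img_feat, W, b):
    # One gate value per channel, computed from both modalities jointly.
    g = sigmoid(np.concatenate([text_feat, img_feat]) @ W + b)
    # Channel-level convex blend: g near 1 favors text, near 0 favors image.
    return g * text_feat + (1.0 - g) * img_feat

rng = np.random.default_rng(0)
d = 8
t = rng.standard_normal(d)          # toy text features
v = rng.standard_normal(d)          # toy image features
W = rng.standard_normal((2 * d, d)) * 0.1  # illustrative gate weights
b = np.zeros(d)

fused = dimension_wise_gate(t, v, W, b)
print(fused.shape)  # (8,)
```

Because each gate value lies in (0, 1), every fused channel is a convex combination of the corresponding text and image channels, which is what lets the network emphasize whichever modality is more salient per dimension.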


The Controversy Over Netflix's Megahit New Show Is Even More Intense Here in the U.K.

Slate

It sometimes happens that a random British TV show will suddenly shoot to enormous, worldwide acclaim without a big publicity campaign to push it there, instead driven primarily by word of mouth. The best example of this is 2024's Baby Reindeer, which became a hit and sparked real-life twists and turns to rival those within the series itself. The latest example, Adolescence, has seen success on a different scale, though. The four-part drama, about a 13-year-old boy named Jamie who is arrested for murdering a girl at his school, became one of Netflix's most popular series of all time--beating out Stranger Things Season 3--within just the first 17 days of its release. Why is everyone watching this show?


BiaSWE: An Expert Annotated Dataset for Misogyny Detection in Swedish

Kukk, Kätriin, Petrelli, Danila, Casademont, Judit, Orlowski, Eric J. W., Dzieliński, Michał, Jacobson, Maria

arXiv.org Artificial Intelligence

In this study, we introduce the process for creating BiaSWE, an expert-annotated dataset tailored for misogyny detection in the Swedish language. To address the cultural and linguistic specificity of misogyny in Swedish, we collaborated with experts from the social sciences and humanities. Our interdisciplinary team developed a rigorous annotation process, incorporating both domain knowledge and language expertise, to capture the nuances of misogyny in a Swedish context. This methodology ensures that the dataset is not only culturally relevant but also aligned with broader efforts in bias detection for low-resource languages. The dataset, along with the annotation guidelines, is publicly available for further research.


Language is Scary when Over-Analyzed: Unpacking Implied Misogynistic Reasoning with Argumentation Theory-Driven Prompts

Muti, Arianna, Ruggeri, Federico, Al-Khatib, Khalid, Barrón-Cedeño, Alberto, Caselli, Tommaso

arXiv.org Artificial Intelligence

We propose misogyny detection as an Argumentative Reasoning task and we investigate the capacity of large language models (LLMs) to understand the implicit reasoning used to convey misogyny in both Italian and English. The central aim is to generate the missing reasoning link between a message and the implied meanings encoding the misogyny. Our study uses argumentation theory as a foundation to form a collection of prompts in both zero-shot and few-shot settings. These prompts integrate different techniques, including chain-of-thought reasoning and augmented knowledge. Our findings show that LLMs fall short in reasoning about misogynistic comments, and that they mostly rely on implicit knowledge derived from internalized common stereotypes about women to generate implied assumptions, rather than on inductive reasoning.
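The "missing reasoning link" setup described above can be illustrated with a prompt template that asks the model to surface the unstated premise (the warrant, in argumentation-theory terms) before classifying. The wording below is a hypothetical sketch in the spirit of the paper's zero-shot chain-of-thought setting, not the authors' actual template.

```python
# Hypothetical argumentation-theory-driven prompt builder.
# The step wording and example message are illustrative assumptions.
def build_prompt(message: str) -> str:
    return (
        "A message may convey misogyny through an implied assumption.\n"
        f"Message: {message}\n"
        "Step 1: State the claim the message makes.\n"
        "Step 2: State the unstated premise (warrant) linking the message "
        "to a belief about women.\n"
        "Step 3: Decide whether the message is misogynistic. "
        "Answer yes or no."
    )

prompt = build_prompt("Women shouldn't referee football matches.")
print(prompt.splitlines()[0])
```

Step 2 is the crux: the paper's finding is that models tend to fill this slot from internalized stereotypes rather than by inductive reasoning over the message itself.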


Covert Bias: The Severity of Social Views' Unalignment in Language Models Towards Implicit and Explicit Opinion

Aldayel, Abeer, Alokaili, Areej, Alahmadi, Rehab

arXiv.org Artificial Intelligence

While various approaches have recently been studied for bias identification, little is known about how implicit language that does not explicitly convey a viewpoint affects bias amplification in large language models. To examine the severity of bias toward a view, we evaluated the performance of two downstream tasks where the implicit and explicit knowledge of social groups were used. First, we present a stress test evaluation by using a biased model in edge cases of excessive bias scenarios. Then, we evaluate how LLMs calibrate linguistically in response to both implicit and explicit opinions when they are aligned with conflicting viewpoints. Our findings reveal a discrepancy in LLM performance in identifying implicit and explicit opinions, with a general tendency of bias toward explicit opinions of opposing stances. Moreover, the bias-aligned models generate more cautious responses using uncertainty phrases compared to the unaligned (zero-shot) base models. The direct, incautious responses of the unaligned models suggest a need for further refinement of decisiveness by incorporating uncertainty markers to enhance their reliability, especially on socially nuanced topics with high subjectivity.


A multitask learning framework for leveraging subjectivity of annotators to identify misogyny

Angel, Jason, Aroyehun, Segun Taofeek, Sidorov, Grigori, Gelbukh, Alexander

arXiv.org Artificial Intelligence

Identifying misogyny using artificial intelligence is a form of combating online toxicity against women. However, the subjective nature of interpreting misogyny poses a significant challenge to modeling the phenomenon. In this paper, we propose a multitask learning approach that leverages the subjectivity of this task to enhance the performance of misogyny identification systems. We incorporated diverse perspectives from annotators in our model design, considering gender and age across six profile groups, and conducted extensive experiments and error analysis using two language models to validate our four alternative designs of the multitask learning technique to identify misogynistic content in English tweets. The results demonstrate that incorporating various viewpoints enhances the language models' ability to interpret different forms of misogyny. This research advances content moderation and highlights the importance of embracing diverse perspectives to build effective online moderation systems.
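The multitask design described above — a shared representation with one prediction head per annotator profile group — can be sketched as follows. The group labels, encoder, and dimensions here are illustrative assumptions, not the paper's actual architecture or its six profile groups.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_hid = 16, 8
# Hypothetical annotator profile groups (gender x age); names are assumptions.
groups = ["women_18_29", "women_30_plus", "men_18_29",
          "men_30_plus", "nonbinary", "mixed"]

W_shared = rng.standard_normal((d_in, d_hid)) * 0.1           # shared encoder
heads = {g: rng.standard_normal(d_hid) * 0.1 for g in groups}  # per-group head

def predict(x):
    h = np.tanh(x @ W_shared)                      # shared features
    # Each head scores misogyny as perceived by its annotator group.
    return {g: float(h @ w) for g, w in heads.items()}

scores = predict(rng.standard_normal(d_in))
print(len(scores))  # 6
```

Training such heads jointly lets the shared encoder absorb what all groups agree on, while each head captures a group's systematic differences in interpretation; at inference, the per-group scores can be aggregated or inspected separately.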


"Annie Bot" and "Loneliness & Company," Reviewed

The New Yorker

Last month, a new dating app called Volar launched in New York City, with the promise "We go on blind dates. So you don't have to." To sign up, you enter your name and phone number, then submit yourself to a brief interview with a chatbot matchmaker. When I made an account, Volar's bot asked what line of work I was in. "I'm a book critic," I replied.


Can Interpretability Layouts Influence Human Perception of Offensive Sentences?

Santos, Thiago Freitas dos, Osman, Nardine, Schorlemmer, Marco

arXiv.org Artificial Intelligence

This paper conducts a user study to assess whether three machine learning (ML) interpretability layouts can influence participants' views when evaluating sentences containing hate speech, focusing on the "Misogyny" and "Racism" classes. Given the divergent conclusions in the literature, we provide empirical evidence on using ML interpretability in online communities through statistical and qualitative analyses of questionnaire responses. A generalized additive model estimates participants' ratings, incorporating within-subject and between-subject designs. While our statistical analysis indicates that none of the interpretability layouts significantly influences participants' views, our qualitative analysis demonstrates the advantages of ML interpretability: 1) triggering participants to provide corrective feedback in case of discrepancies between their views and the model, and 2) providing insights to evaluate a model's behavior beyond traditional performance metrics.