objectification
Co-AttenDWG: Co-Attentive Dimension-Wise Gating and Expert Fusion for Multi-Modal Offensive Content Detection
Hossain, Md. Mithun, Hossain, Md. Shakil, Chaki, Sudipto, Mridha, M. F.
Multi-modal learning has emerged as a crucial research direction, as integrating textual and visual information can substantially enhance performance in tasks such as classification, retrieval, and scene understanding. Despite advances with large pre-trained models, existing approaches often suffer from insufficient cross-modal interactions and rigid fusion strategies, failing to fully harness the complementary strengths of different modalities. To address these limitations, we propose Co-AttenDWG, an architecture that combines co-attention with dimension-wise gating and expert fusion. Our approach first projects textual and visual features into a shared embedding space, where a dedicated co-attention mechanism enables simultaneous, fine-grained interactions between modalities. This is further strengthened by a dimension-wise gating network, which adaptively modulates feature contributions at the channel level to emphasize salient information. In parallel, dual-path encoders independently refine modality-specific representations, while an additional cross-attention layer aligns the modalities further. The resulting features are aggregated via an expert fusion module that integrates learned gating and self-attention, yielding a robust unified representation. Experimental results on the MIMIC and SemEval Memotion 1.0 datasets show that Co-AttenDWG achieves state-of-the-art performance and superior cross-modal alignment, highlighting its effectiveness for diverse multi-modal applications.
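The dimension-wise gating idea in this abstract can be sketched in a few lines of plain Python. This is a hedged illustration only: the sigmoid gate, the linear gate network, and the toy sizes are assumptions for exposition, not the authors' actual implementation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dimension_wise_gate(features, weights, bias):
    """Channel-level gating: each feature dimension is rescaled by a
    learned gate in (0, 1), amplifying salient dimensions and
    suppressing the rest.
    features: list[float] of length d
    weights:  d x d matrix (list of rows) producing the gate logits
    bias:     list[float] of length d
    """
    d = len(features)
    gates = []
    for i in range(d):
        logit = bias[i] + sum(weights[i][j] * features[j] for j in range(d))
        gates.append(sigmoid(logit))
    # element-wise modulation of the input features by the gates
    return [g * f for g, f in zip(gates, features)]

# toy example: 3-dim feature vector with diagonal gate weights
feats = [1.0, -2.0, 0.5]
W = [[5.0, 0.0, 0.0], [0.0, 5.0, 0.0], [0.0, 0.0, 5.0]]
b = [0.0, 0.0, 0.0]
gated = dimension_wise_gate(feats, W, b)
```

With these toy weights, the strongly positive dimension passes through almost unchanged while the strongly negative one is gated nearly to zero, which is the channel-level emphasis the abstract describes.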
Psychological Effects of AI-Driven Marketing Tools for Beauty/Facial Feature Enhancement
Agrawal, Ayushi, Kondai, Aditya, Vemuri, Kavita
AI-powered facial assessment tools are reshaping how individuals evaluate appearance and internalize social judgments. This study examines the psychological impact of such tools on self-objectification, self-esteem, and emotional responses, with attention to gender differences. Two samples used distinct versions of a facial analysis tool: one overtly critical (N=75; M=22.9 years), and another more neutral (N=51; M=19.9 years). Participants completed validated self-objectification and self-esteem scales and custom items measuring emotion, digital/physical appearance enhancement (DAE, PAEE), and perceived social emotion (PSE). Results revealed consistent links between high self-objectification, low self-esteem, and increased appearance enhancement behaviors across both versions. Despite softer framing, the newer tool still evoked negative emotional responses (U=1466.5, p=0.013), indicating implicit feedback may reinforce appearance-related insecurities. Gender differences emerged in DAE (p=0.025) and PSE (p<0.001), with females more prone to digital enhancement and less likely to perceive emotional impact in others. These findings reveal how AI tools may unintentionally reinforce and amplify existing social biases and underscore the critical need for responsible AI design and development. Future research will investigate how human ideologies embedded in the training data of such tools shape their evaluative outputs, and how these, in turn, influence user attitudes and decisions.
UK watchdog bans 'shocking' ads in mobile games that objectified women
An investigation by the UK advertising watchdog has found a number of shocking ads in mobile gaming apps that depict women as sexual objects, use pornographic tropes, and feature non-consensual sexual scenarios involving "violent and coercive control". The Advertising Standards Authority (ASA) used avatars, which mimic the browsing behaviour of different gender and age groups, to monitor ads served when mobile games are open and identify breaches of the UK code. While most of the thousands of promotions served to the avatars complied with UK rules, the watchdog identified and banned eight that featured "shocking" content that portrayed women in a harmful way. Two ads promoting an artificial intelligence chatbot app, Linky: Chat With Characters AI, began with a woman dressed in a manga T-shirt, a short skirt and large bunny ears dancing in a bedroom with text reading: "Tell me which bf [boyfriend] I should break up with." The ad moved on to animated content featuring text conversations with three manga-style young men.
Application of integrated gradients explainability to sociopsychological semantic markers
Aghababaei, Ali, Nikadon, Jan, Formanowicz, Magdalena, Bettinsoli, Maria Laura, Cervone, Carmen, Suitner, Caterina, Erseghe, Tomaso
Classification of textual data in terms of sentiment, or more nuanced sociopsychological markers (e.g., agency), is now a popular approach commonly applied at the sentence level. In this paper, we exploit the integrated gradient (IG) method to capture the classification output at the word level, revealing which words actually contribute to the classification process. This approach improves explainability and provides in-depth insights into the text. We focus on sociopsychological markers beyond sentiment and investigate how to effectively apply IG to agency, one of the very few markers for which a verified deep learning classifier, BERTAgent, is currently available. Performance and system parameters are carefully tested, alternatives to the IG approach are evaluated, and the usefulness of the result is verified in a relevant application scenario. The method is also applied in a scenario where only a small labeled dataset is available, with the aim of exploiting IG to identify the salient words that contribute to building the different classes that relate to relevant sociopsychological markers. To achieve this, an uncommon training procedure that encourages overfitting is employed to enhance the distinctiveness of each class. The results are analyzed through the lens of social psychology, offering valuable insights.
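The integrated gradients method the abstract relies on has a compact definition: attribute feature i by (x_i − x'_i) times the path integral of the model's gradient from a baseline x' to the input x. A minimal numerical sketch, assuming a toy differentiable scoring function in place of a real classifier such as BERTAgent (the finite-difference gradients and toy inputs are illustrative assumptions):

```python
def integrated_gradients(f, x, baseline, steps=50, eps=1e-5):
    """Riemann-sum approximation of IG attributions for a
    scalar-output function f over an input vector x. Real usage
    would compute gradients via autodiff over a trained model."""
    d = len(x)
    attributions = [0.0] * d
    for step in range(1, steps + 1):
        alpha = step / steps
        # point on the straight-line path from baseline to x
        point = [baseline[i] + alpha * (x[i] - baseline[i]) for i in range(d)]
        for i in range(d):
            bumped = list(point)
            bumped[i] += eps
            # finite-difference estimate of the partial derivative
            grad_i = (f(bumped) - f(point)) / eps
            attributions[i] += grad_i
    # scale averaged gradients by the input-baseline difference
    return [(x[i] - baseline[i]) * attributions[i] / steps for i in range(d)]

# toy "classifier": a linear score over embedding dimensions
def score(v):
    return 2.0 * v[0] - 1.0 * v[1] + 0.0 * v[2]

attr = integrated_gradients(score, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
```

For a linear score the attributions recover the weights exactly, and they satisfy the IG completeness property: the attributions sum to f(x) − f(baseline).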
Beats of Bias: Analyzing Lyrics with Topic Modeling and Gender Bias Measurements
Chen, Danqing, Satish, Adithi, Khanbayov, Rasul, Schuster, Carolin M., Groh, Georg
This paper uses topic modeling and bias measurement techniques to analyze and determine gender bias in English song lyrics. We utilize BERTopic to cluster 537,553 English songs into distinct topics and chart their development over time. Our analysis shows the thematic shift in song lyrics over the years, from themes of romance to the increasing sexualization of women in songs. We observe large amounts of profanity and misogynistic lyrics on various topics, especially in the overall biggest cluster. Furthermore, to analyze gender bias across topics and genres, we employ the Single Category Word Embedding Association Test (SC-WEAT) to compute bias scores for the word embeddings trained on the most popular topics as well as for each genre. We find that words related to intelligence and strength tend to show a male bias across genres, as opposed to appearance and weakness words, which are more female-biased; however, a closer look also reveals differences in biases across topics.
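The SC-WEAT score used above is an effect size (Cohen's d): the difference between a single target embedding's mean cosine similarity to two attribute word sets, divided by the pooled standard deviation of all similarities. A minimal sketch with toy 2-d embeddings (the vectors and attribute labels are invented for illustration):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def sc_weat_effect_size(target, attrs_a, attrs_b):
    """SC-WEAT effect size for one target embedding against two
    attribute sets (e.g. male- vs. female-associated words)."""
    sims_a = [cosine(target, a) for a in attrs_a]
    sims_b = [cosine(target, b) for b in attrs_b]
    all_sims = sims_a + sims_b
    mean_diff = sum(sims_a) / len(sims_a) - sum(sims_b) / len(sims_b)
    mean_all = sum(all_sims) / len(all_sims)
    # sample standard deviation over the pooled similarities
    std_all = math.sqrt(sum((s - mean_all) ** 2 for s in all_sims)
                        / (len(all_sims) - 1))
    return mean_diff / std_all

# toy example: the target word leans toward attribute set A
target = [1.0, 0.2]
A = [[1.0, 0.0], [0.9, 0.1]]   # hypothetical "strength" vectors
B = [[0.0, 1.0], [0.1, 0.9]]   # hypothetical "weakness" vectors
d = sc_weat_effect_size(target, A, B)
```

A strongly positive d indicates the target is much closer to set A, which is how the paper reads off male- vs. female-leaning associations per topic and genre.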
Reflecting the Male Gaze: Quantifying Female Objectification in 19th and 20th Century Novels
Luo, Kexin, Mao, Yue, Zhang, Bei, Hao, Sophie
Inspired by the concept of the male gaze (Mulvey, 1975) in literature and media studies, this paper proposes a framework for analyzing gender bias in terms of female objectification: the extent to which a text portrays female individuals as objects of visual pleasure. Our framework measures female objectification along two axes. First, we compute an agency bias score that indicates whether male entities are more likely to appear in the text as grammatical agents than female entities. Next, by analyzing the word embedding space induced by a text (Caliskan et al., 2017), we compute an appearance bias score that indicates whether female entities are more closely associated with appearance-related words than male entities. Applying our framework to 19th and 20th century novels reveals evidence of female objectification in literature: we find that novels written from a male perspective systematically objectify female characters, while novels written from a female perspective do not exhibit statistically significant objectification of any gender.
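The agency-bias axis above can be sketched as a simple difference in proportions: how often male entities appear as grammatical agents versus female entities. This is a hypothetical simplification of the paper's score (the exact definition is not given here), and the dependency-parsing step that labels mentions as agents is elided:

```python
def agency_bias_score(mentions):
    """Difference between the proportion of male-entity mentions
    occurring as grammatical agents and the corresponding proportion
    for female entities. Positive values mean male entities act more
    often; negative values mean female entities do.
    `mentions` is a list of (gender, is_agent) pairs, assumed to be
    extracted from a parsed text."""
    male = [is_agent for g, is_agent in mentions if g == "male"]
    female = [is_agent for g, is_agent in mentions if g == "female"]
    p_male = sum(male) / len(male)
    p_female = sum(female) / len(female)
    return p_male - p_female

# toy counts: male entities mostly act, female entities are acted upon
mentions = ([("male", True)] * 7 + [("male", False)] * 3 +
            [("female", True)] * 3 + [("female", False)] * 7)
bias = agency_bias_score(mentions)
```

On this toy data the score is 0.7 − 0.3 = 0.4, i.e. a male agency bias; the appearance axis would be computed separately with an embedding-association test in the style of Caliskan et al. (2017).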
AI girlfriends are here – but there's a dark side to virtual companions Arwa Mahdawi
It is a truth universally acknowledged, that a single man in possession of a computer must be in want of an AI girlfriend. Certainly a lot of enterprising individuals seem to think there's a lucrative market for digital romance. OpenAI recently launched its GPT Store, where paid ChatGPT users can buy and sell customized chatbots (think Apple's app store, but for chatbots) – and the offerings include a large selection of digital girlfriends. "AI girlfriend bots are already flooding OpenAI's GPT store," a headline from Quartz, which first reported on the issue, blared on Thursday. Quartz went on to note that "the AI girlfriend bots go against OpenAI's usage policy … The company bans GPTs 'dedicated to fostering romantic companionship or performing regulated activities'."
Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias
Wolfe, Robert, Yang, Yiwei, Howe, Bill, Caliskan, Aylin
Nine language-vision AI models trained on web scrapes with the Contrastive Language-Image Pretraining (CLIP) objective are evaluated for evidence of a bias studied by psychologists: the sexual objectification of girls and women, which occurs when a person's human characteristics, such as emotions, are disregarded and the person is treated as a body. We replicate three experiments in psychology quantifying sexual objectification and show that the phenomena persist in AI. A first experiment uses standardized images of women from the Sexual OBjectification and EMotion Database, and finds that human characteristics are disassociated from images of objectified women: the model's recognition of emotional state is mediated by whether the subject is fully or partially clothed. Embedding association tests (EATs) return significant effect sizes for both anger (d > 0.80) and sadness (d > 0.50), associating images of fully clothed subjects with emotions. Grad-CAM saliency maps highlight that CLIP gets distracted from emotional expressions in objectified images. A second experiment measures the effect in a representative application: an automatic image captioner (Antarctic Captions) includes words denoting emotion less than 50% as often for images of partially clothed women as for images of fully clothed women. A third experiment finds that images of female professionals (scientists, doctors, executives) are more likely to be associated with sexual descriptions relative to images of male professionals. A fourth experiment shows that a prompt of "a [age] year old girl" generates sexualized images (as determined by an NSFW classifier) up to 73% of the time for VQGAN-CLIP and Stable Diffusion; the corresponding rate for boys never surpasses 9%. The evidence indicates that language-vision AI models trained on web scrapes learn biases of sexual objectification, which propagate to downstream applications.
The AI Art Movement Has an Objectification Problem - Agents
In 1999, the world's first commercially available color video camera phone arrived in the form of the Kyocera VP-210 in Japan. A year after its launch, worries over the rapid rise in "up-skirt" voyeurism the phones enabled spread quickly throughout the country, prompting wireless carriers to institute a policy guaranteeing that the phones they offered would feature a loud camera shutter sound that users could not disable. The effectiveness of that measure is, to this day, up for debate. But the episode remains a useful history lesson on the widespread adoption of technology: new tools make doing everything easier, and not just the good things. Prompt-based AI art generators are now having their VP-210 moment.