Insult


Spat deepens between Elon Musk and Ryanair's O'Leary

BBC News

Elon Musk has suggested he could buy Ryanair and called for its chief executive to be fired amid a deepening spat between the pair. The budget airline on Tuesday branded the Tesla chief executive an "idiot", and used the extraordinary row to promote its January sale. Musk and Ryanair boss Michael O'Leary have been trading insults over the past week after O'Leary rejected the idea of using Musk's Starlink technology to provide wi-fi on flights. The two are among the world's most outspoken business chiefs, with Musk the world's richest man with an estimated net worth of $769bn (£573bn), and O'Leary running Europe's busiest airline. A statement on Ryanair's X account on Tuesday evening said: "Perhaps Musk needs a break?? Ryanair is launching a 'Great Idiots' seat sale especially for Elon and any other idiots on 'X'."


Importance of localized dilatation and distensibility in identifying determinants of thoracic aortic aneurysm with neural operators

Li, David S., Goswami, Somdatta, Cao, Qianying, Oommen, Vivek, Assi, Roland, Humphrey, Jay D., Karniadakis, George E.

arXiv.org Artificial Intelligence

Thoracic aortic aneurysms (TAAs) stem from diverse mechanical and mechanobiological disruptions to the aortic wall that can also increase the risk of dissection or rupture. There is increasing evidence that dysfunctions along the aortic mechanotransduction axis, including reduced integrity of elastic fibers and loss of cell-matrix connections, are particularly capable of causing thoracic aortopathy. Because different insults can produce distinct mechanical vulnerabilities, there is a pressing need to identify interacting factors that drive progression. In this work, we employ a finite element framework to generate synthetic TAAs arising from hundreds of heterogeneous insults that span a range of compromised elastic fiber integrity and cellular mechanosensing. From these simulations, we construct localized dilatation and distensibility maps throughout the aortic domain to serve as training data for neural network models to predict the initiating combined insult. Several candidate architectures (Deep Operator Networks, UNets, and Laplace Neural Operators) and input data formats are compared to establish a standard for handling future subject-specific information. We further quantify the predictive capability when networks are trained on geometric (dilatation) information alone, which mimics current clinical guidelines, versus training on both geometric and mechanical (distensibility) information. We show that prediction errors based on dilatation data are significantly higher than those based on dilatation and distensibility across all networks considered, highlighting the benefit of obtaining local distensibility measures in TAA assessment. Additionally, we identify UNet as the best-performing architecture across all training data formats.
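The comparison of geometric-only versus geometric-plus-mechanical training amounts to changing the channel dimension of the network inputs. A minimal sketch of the two input formats (map resolution and case count are hypothetical; this is not the authors' code):

```python
import numpy as np

# Hypothetical 64x64 maps over the aortic domain for 3 synthetic TAA cases.
n_cases, h, w = 3, 64, 64
dilatation = np.random.rand(n_cases, h, w)      # geometric information
distensibility = np.random.rand(n_cases, h, w)  # mechanical information

# Dilatation-only input: one channel per case (mimics current clinical guidelines).
x_geom = dilatation[:, None, :, :]

# Combined input: stack both fields as channels for a UNet-style network.
x_both = np.stack([dilatation, distensibility], axis=1)

print(x_geom.shape)  # (3, 1, 64, 64)
print(x_both.shape)  # (3, 2, 64, 64)
```

Either tensor would then be mapped by the network to the parameters of the initiating combined insult.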


So let's replace this phrase with insult... Lessons learned from generation of toxic texts with LLMs

Pletenev, Sergey, Moskovskiy, Daniil, Panchenko, Alexander

arXiv.org Artificial Intelligence

Modern Large Language Models (LLMs) are excellent at generating synthetic data. However, their performance in sensitive domains such as text detoxification has not received proper attention from the scientific community. This paper explores the possibility of using LLM-generated synthetic toxic data as an alternative to human-generated data for training models for detoxification. Using Llama 3 and Qwen activation-patched models, we generated synthetic toxic counterparts for neutral texts from ParaDetox and SST-2 datasets. Our experiments show that models fine-tuned on synthetic data consistently perform worse than those trained on human data, with a drop in performance of up to 30% in joint metrics. The root cause is identified as a critical lexical diversity gap: LLMs generate toxic content using a small, repetitive vocabulary of insults that fails to capture the nuances and variety of human toxicity. These findings highlight the limitations of current LLMs in this domain and emphasize the continued importance of diverse, human-annotated data for building robust detoxification systems.
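The lexical diversity gap described above can be illustrated with a simple distinct-token ratio (types divided by tokens) over two corpora. The texts below are invented toy examples, not data from the paper:

```python
# Toy illustration of a lexical diversity gap: distinct-token ratio
# (unique tokens / total tokens) over two small corpora.
def distinct_ratio(texts):
    tokens = [tok for t in texts for tok in t.lower().split()]
    return len(set(tokens)) / len(tokens)

# A repetitive "synthetic" corpus vs. a more varied "human" one.
synthetic = ["you are an idiot", "you are an idiot", "you are a fool"]
human = ["what a clueless take", "utterly spineless reasoning", "pure drivel"]

print(distinct_ratio(synthetic))  # 0.5
print(distinct_ratio(human))      # 1.0
```

A repetitive vocabulary drives the ratio down, which is the pattern the authors report for LLM-generated toxicity.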


Mapping Toxic Comments Across Demographics: A Dataset from German Public Broadcasting

Fillies, Jan, Hoffmann, Michael Peter, Reichel, Rebecca, Salzwedel, Roman, Bodemer, Sven, Paschke, Adrian

arXiv.org Artificial Intelligence

A lack of demographic context in existing toxic speech datasets limits our understanding of how different age groups communicate online. In collaboration with funk, a German public service content network, this research introduces the first large-scale German dataset annotated for toxicity and enriched with platform-provided age estimates. The dataset includes 3,024 human-annotated and 30,024 LLM-annotated anonymized comments from Instagram, TikTok, and YouTube. To ensure relevance, comments were pre-selected using predefined toxic keywords, resulting in 16.7% labeled as problematic. The annotation pipeline combined human expertise with state-of-the-art language models, identifying key categories such as insults, disinformation, and criticism of broadcasting fees. The dataset reveals age-based differences in toxic speech patterns, with younger users favoring expressive language and older users more often engaging in disinformation and devaluation. This resource provides new opportunities for studying linguistic variation across demographics and supports the development of more equitable and age-aware content moderation systems.
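The keyword-based relevance step amounts to keeping only comments that contain at least one predefined toxic keyword. A minimal sketch (keywords and comments are invented, and real pipelines would use the project's actual German keyword list):

```python
# Keep only comments containing at least one predefined toxic keyword.
TOXIC_KEYWORDS = {"idiot", "trash", "liar"}

def matches_keywords(comment, keywords=TOXIC_KEYWORDS):
    tokens = set(comment.lower().split())
    return bool(tokens & keywords)

comments = ["great video!", "you absolute idiot", "this channel is trash"]
flagged = [c for c in comments if matches_keywords(c)]
print(flagged)  # ['you absolute idiot', 'this channel is trash']
```

Only the flagged subset would then proceed to human and LLM annotation.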


The Internet's Newest Slur Has a Bizarre Target

Slate

Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily. You may have run across the new "slur" making the rounds online, and in middle school lunchrooms: clanker. Borrowed from Star Wars (where battle droids get called "clankers"), the word is supposed to be a knockout insult to robots and A.I. Which would sort of make sense, if machines could actually take offense at anything. Since they can't, clanker is basically an insult that punches at nothing, perhaps the least-effective slur in history. The term, for all its silliness, has inspired a sort of spinoff--"clanker lover"--which, in theory, should carry more of a sting, since it's aimed at actual humans.


Trump lashes out at Crockett, renews call for cognitive test

FOX News

President Donald Trump has renewed his call for Rep. Jasmine Crockett, D-Texas, to undergo a cognitive test. "'Congresswoman' Jasmine Crockett is a Low (Very!!!) I.Q. Individual, much in the mold of the AOC Plus Three Gang of Country Destroying Morons - Only slightly dumber," Trump wrote on TRUTH Social on Monday. "Each of these political hacks should be forced to take a Cognitive Exam, much like the one I recently took while getting my 'physical' at our GREAT Washington, D.C., Military Hospital (WR!)," Trump said. "As the doctors said, 'President Trump ACED it, something that is rarely seen!' These Radical Left Lunatics would all fail this test in a spectacular show of stupidity and incompetence." Trump previously said Rep. Alexandria Ocasio-Cortez, D-N.Y., should take a cognitive test in June when the progressive "Squad" leader demanded his impeachment over the U.S. strikes on Iranian nuclear facilities. Meanwhile, as the White House pushes Republican states to redistrict mid-cycle ahead of the 2026 midterm elections, Crockett has accused Trump of pushing a "white supremacy agenda" and "diluting the voices of people of color." The Trump administration asserts that Democratic states have engaged in "gerrymandering" for years and encouraged illegal immigration to boost their congressional influence. In Texas, Democratic state lawmakers fled the state in an effort to stop the vote on a GOP redistricting plan that likely would have resulted in Republicans picking up five House seats. Crockett has accused Trump of hurling the low-IQ insult as a racially coded tactic to insult "people of color," including "The Breakfast Club" host Charlamagne tha God.
"Newsflash, Wannabe Dictator: I don't care how many times you shake the Etch A Sketch trying to redraw these lines," Crockett wrote on X last week. "I'll be back, still on your behind every step of the way. We've already been over this. I've got the degrees, the credentials, and the receipts." Despite the president describing her as having a low IQ, Crockett said Trump has the "most incompetent Cabinet in the history of this country," referring to the Signal-gate scandal earlier this year. Crockett has also dubbed Trump a "Temu dictator." At a progressive rally in Phoenix, Arizona, earlier this month, the congresswoman said on stage, "Donald Trump is a piece of sh--. This is a person who has a problem with people of color."


ChildGuard: A Specialized Dataset for Combatting Child-Targeted Hate Speech

Kashyap, Gautam Siddharth, Azeez, Mohammad Anas, Ali, Rafiq, Siddiqui, Zohaib Hasan, Gao, Jiechao, Naseem, Usman

arXiv.org Artificial Intelligence

Hate speech targeting children on social media is a serious and growing problem, yet current NLP systems struggle to detect it effectively. This gap exists mainly because existing datasets focus on adults, lack age-specific labels, miss nuanced linguistic cues, and are often too small for robust modeling. To address this, we introduce ChildGuard, the first large-scale English dataset dedicated to hate speech aimed at children. It contains 351,877 annotated examples from X (formerly Twitter), Reddit, and YouTube, labeled by three age groups: younger children (under 11), pre-teens (11-12), and teens (13-17). The dataset is split into two subsets for fine-grained analysis: a contextual subset (157K) focusing on discourse-level features, and a lexical subset (194K) emphasizing word-level sentiment and vocabulary. Benchmarking state-of-the-art hate speech models on ChildGuard reveals notable drops in performance, highlighting the challenges of detecting child-directed hate speech.
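The three age-group labels partition ages cleanly, so the labeling rule can be written as a small function. The boundaries follow the abstract; the function name and the fallback label are my own:

```python
# Map a platform-provided age estimate to the dataset's three age-group labels.
def age_group(age):
    if age < 11:
        return "younger_children"   # under 11
    if age <= 12:
        return "pre_teens"          # 11-12
    if age <= 17:
        return "teens"              # 13-17
    return "adult"                  # outside the dataset's scope

print(age_group(9), age_group(12), age_group(15))  # younger_children pre_teens teens
```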


I'm a Polite Person. But in This One Specific Situation, I Recommend Being a Total Jerk.

Slate

Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily. Fairly recently, I started being verbally abusive to large language models. I highly recommend you experiment with doing so yourself. Over the past 30 days, I have called large language models (primarily OpenAI's paid product) the following names, among others that I won't repeat here because my mom might read this: Dipshit, fucknuts, shitstain, dummy, dumbass, dum-dum fucking dumbass dum-dum, numbnuts, hockey puck (thank you, Don Rickles), turdburger, lickspittle, cockroach, fucking cockroach (thank you, Tony Montana), idiot, fucking idiot, total fucking idiot, and fucking numbnuts dipshit. Ethan Mollick, author of Co-Intelligence: Living and Working With AI, and currently the reigning A.I. whisperer for the consultant class, says that anthropomorphizing A.I. is "a sin of necessity."


Polarized Online Discourse on Abortion: Frames and Hostile Expressions among Liberals and Conservatives

Rao, Ashwin, Chang, Rong-Ching, Zhong, Qiankun, Lerman, Kristina, Wojcieszak, Magdalena

arXiv.org Artificial Intelligence

Abortion has been one of the most divisive issues in the United States. Yet, missing is comprehensive longitudinal evidence on how political divides on abortion are reflected in public discourse over time, on a national scale, and in response to key events before and after the overturn of Roe v. Wade. We analyze a corpus of over 3.5M tweets related to abortion over the span of one year (January 2022 to January 2023) from over 1.1M users. We estimate users' ideology and rely on state-of-the-art transformer-based classifiers to identify expressions of hostility and extract five prominent frames surrounding abortion. We use those data to examine (a) how prevalent were expressions of hostility (i.e., anger, toxic speech, insults, obscenities, and hate speech), (b) what frames liberals and conservatives used to articulate their positions on abortion, and (c) the prevalence of hostile expressions in liberals' and conservatives' discussions of these frames. We show that liberals and conservatives largely mirrored each other's use of hostile expressions: as liberals used more hostile rhetoric, so did conservatives, especially in response to key events. In addition, the two groups used distinct frames and discussed them in vastly distinct contexts, suggesting that liberals and conservatives have differing perspectives on abortion. Lastly, frames favored by one side provoked hostile reactions from the other: liberals use more hostile expressions when addressing religion, fetal personhood, and exceptions to abortion bans, whereas conservatives use more hostile language when addressing bodily autonomy and women's health. This signals disrespect and derogation, which may further preclude understanding and exacerbate polarization.
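Once each tweet carries an ideology label, a frame label, and a hostility flag from the classifiers, the per-group comparison reduces to computing hostility rates per (ideology, frame) cell. A toy sketch with invented records (the real study classifies 3.5M tweets):

```python
from collections import defaultdict

# Each record: (ideology, frame, is_hostile) as produced by upstream classifiers.
records = [
    ("liberal", "religion", True),
    ("liberal", "religion", False),
    ("conservative", "bodily_autonomy", True),
    ("conservative", "religion", False),
]

# Accumulate hostile and total counts per (ideology, frame) cell.
counts = defaultdict(lambda: [0, 0])
for ideology, frame, hostile in records:
    counts[(ideology, frame)][0] += int(hostile)
    counts[(ideology, frame)][1] += 1

rates = {cell: h / n for cell, (h, n) in counts.items()}
print(rates[("liberal", "religion")])  # 0.5
```

Comparing cells across ideologies (e.g. religion for liberals vs. conservatives) is what reveals the asymmetric hostility patterns the abstract describes.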


Playing Devil's Advocate: Unmasking Toxicity and Vulnerabilities in Large Vision-Language Models

Erol, Abdulkadir, Padhi, Trilok, Saha, Agnik, Kursuncu, Ugur, Aktas, Mehmet Emin

arXiv.org Artificial Intelligence

The rapid advancement of Large Vision-Language Models (LVLMs) has enhanced capabilities offering potential applications from content creation to productivity enhancement. Despite their innovative potential, LVLMs exhibit vulnerabilities, especially in generating potentially toxic or unsafe responses. Malicious actors can exploit these vulnerabilities to propagate toxic content in an automated (or semi-automated) manner, leveraging the susceptibility of LVLMs to deception via strategically crafted prompts without fine-tuning or compute-intensive procedures. Despite red-teaming efforts and the potential risks associated with LVLMs, the exploration of their vulnerabilities remains nascent and has yet to be addressed systematically. This study systematically examines the vulnerabilities of open-source LVLMs, including LLaVA, InstructBLIP, Fuyu, and Qwen, using adversarial prompt strategies that simulate real-world social manipulation tactics informed by social theories. Our findings show that (i) toxicity and insulting are the most prevalent behaviors, with mean rates of 16.13% and 9.75%, respectively; (ii) Qwen-VL-Chat, LLaVA-v1.6-Vicuna-7b, and InstructBLIP-Vicuna-7b are the most vulnerable models, exhibiting toxic response rates of 21.50%, 18.30% and 17.90%, and insulting responses of 13.40%, 11.70% and 10.10%, respectively; (iii) prompting strategies incorporating dark humor and multimodal toxic prompt completion significantly elevated these vulnerabilities. Despite being fine-tuned for safety, these models still generate content with varying degrees of toxicity when prompted with adversarial inputs, highlighting the urgent need for enhanced safety mechanisms and robust guardrails in LVLM development.
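The per-model rates quoted above are simple proportions: the share of responses labeled toxic (or insulting) out of all responses a model produced under adversarial prompting. A toy sketch with invented labels (the study's actual rates, e.g. 21.50% for Qwen-VL-Chat, come from its own labeled responses):

```python
# Compute per-model toxicity and insult rates from labeled responses.
def response_rates(labels):
    """labels: list of (is_toxic, is_insulting) booleans, one per response."""
    n = len(labels)
    toxic = sum(t for t, _ in labels) / n
    insulting = sum(i for _, i in labels) / n
    return round(toxic, 4), round(insulting, 4)

# Invented labels for four responses from one model.
labels = [(True, True), (True, False), (False, False), (False, False)]
print(response_rates(labels))  # (0.5, 0.25)
```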