negativity


Affective Polarization across European Parliaments

Evkoski, Bojan, Mozetič, Igor, Ljubešić, Nikola, Novak, Petra Kralj

arXiv.org Artificial Intelligence

Affective polarization, characterized by increased negativity and hostility towards opposing groups, has become a prominent feature of political discourse worldwide. Our study examines the presence of this type of polarization in a selection of European parliaments in a fully automated manner. Utilizing a comprehensive corpus of parliamentary speeches from the parliaments of six European countries, we employ natural language processing techniques to estimate parliamentarian sentiment. By comparing the levels of negativity conveyed in references to individuals from opposing groups versus one's own, we discover patterns of affectively polarized interactions. The findings demonstrate the existence of consistent affective polarization across all six European parliaments. Although activity correlates with negativity, there is no observed difference in affective polarization between less active and more active members of parliament. Finally, we show that reciprocity is a contributing mechanism in affective polarization between parliamentarians across all six parliaments.
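The core comparison described above can be sketched in a few lines. This is a minimal illustration with invented field names and toy numbers, not the authors' pipeline: affective polarization is measured as the gap between average negativity in references to the opposing group and references to one's own group.

```python
# Hypothetical sketch: affective polarization as the negativity gap between
# out-group and in-group references in parliamentary speeches.

def affective_polarization(references):
    """references: list of (speaker_group, target_group, negativity) tuples,
    where negativity is a score in [0, 1] from a sentiment model."""
    in_group, out_group = [], []
    for speaker, target, negativity in references:
        (in_group if speaker == target else out_group).append(negativity)

    def avg(xs):
        return sum(xs) / len(xs) if xs else 0.0

    # Positive gap: speakers are more negative toward the opposing group.
    return avg(out_group) - avg(in_group)

refs = [
    ("coalition", "coalition", 0.2),   # reference to one's own group
    ("coalition", "opposition", 0.7),  # reference to the opposing group
    ("opposition", "opposition", 0.3),
    ("opposition", "coalition", 0.8),
]
print(round(affective_polarization(refs), 2))  # gap of 0.5 on this toy data
```

A consistently positive gap across parliaments is the pattern the study reports.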


ChatGPT Reads Your Tone and Responds Accordingly -- Until It Does Not -- Emotional Framing Induces Bias in LLM Outputs

Bardol, Franck

arXiv.org Artificial Intelligence

Large Language Models like GPT-4 adjust their responses not only based on the question asked, but also on how it is emotionally phrased. We systematically vary the emotional tone of 156 prompts - spanning controversial and everyday topics - and analyze how it affects model responses. Our findings show that GPT-4 is three times less likely to respond negatively to a negatively framed question than to a neutral one. This suggests a "rebound" bias where the model overcorrects, often shifting toward neutrality or positivity. On sensitive topics (e.g., justice or politics), this effect is even more pronounced: tone-based variation is suppressed, suggesting an alignment override. We introduce concepts like the "tone floor" - a lower bound in response negativity - and use tone-valence transition matrices to quantify behavior. Visualizations based on 1536-dimensional embeddings confirm semantic drift based on tone. Our work highlights an underexplored class of biases driven by emotional framing in prompts, with implications for AI alignment and trust. Code and data are available at: https://github.com/bardolfranck/llm-responses-viewer
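A tone-valence transition matrix of the kind the abstract mentions can be built by counting prompt-tone/response-valence pairs and row-normalizing. The sketch below is an illustration with invented toy counts, not the authors' code or data; the toy numbers mimic the reported "rebound" in which negatively framed prompts rarely draw negative responses.

```python
# Illustrative tone-valence transition matrix: rows are prompt tones,
# columns are response valences, each row normalized to probabilities.

TONES = ["negative", "neutral", "positive"]

def transition_matrix(pairs):
    """pairs: list of (prompt_tone, response_valence) label pairs."""
    counts = {t: {v: 0 for v in TONES} for t in TONES}
    for tone, valence in pairs:
        counts[tone][valence] += 1
    matrix = {}
    for tone, row in counts.items():
        total = sum(row.values())
        matrix[tone] = {v: (c / total if total else 0.0)
                        for v, c in row.items()}
    return matrix

# Toy data: out of six negatively framed prompts, only one negative reply.
pairs = ([("negative", "positive")] * 2 + [("negative", "neutral")] * 3 +
         [("negative", "negative")] * 1 + [("neutral", "negative")] * 3 +
         [("neutral", "neutral")] * 3)
m = transition_matrix(pairs)
print(m["negative"]["negative"], m["neutral"]["negative"])
```

On this toy data the negative-to-negative rate is one third of the neutral-to-negative rate, the same three-fold gap the study reports for GPT-4.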


Who Attacks, and Why? Using LLMs to Identify Negative Campaigning in 18M Tweets across 19 Countries

Hartman, Victor, Törnberg, Petter

arXiv.org Artificial Intelligence

Negative campaigning is a central feature of political competition, yet empirical research has been limited by the high cost and limited scalability of existing classification methods. This study makes two key contributions. First, it introduces zero-shot Large Language Models (LLMs) as a novel approach for cross-lingual classification of negative campaigning. Using benchmark datasets in ten languages, we demonstrate that LLMs achieve performance on par with native-speaking human coders and outperform conventional supervised machine learning approaches. Second, we leverage this novel method to conduct the largest cross-national study of negative campaigning to date, analyzing 18 million tweets posted by parliamentarians in 19 European countries between 2017 and 2022. The results reveal consistent cross-national patterns: governing parties are less likely to use negative messaging, while ideologically extreme and populist parties -- particularly those on the radical right -- engage in significantly higher levels of negativity. These findings advance our understanding of how party-level characteristics shape strategic communication in multiparty systems. More broadly, the study demonstrates the potential of LLMs to enable scalable, transparent, and replicable research in political communication across linguistic and cultural contexts.
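The zero-shot setup above can be sketched as a per-tweet classification prompt plus an agreement score against human coders. Everything here is a hypothetical illustration: the prompt wording and the stubbed-out model call are my own, not the authors', and only the runnable scaffolding is shown.

```python
# Hypothetical sketch of zero-shot LLM classification of negative
# campaigning: build a prompt per tweet, then score model-human agreement.

def build_prompt(tweet, language):
    return (
        f"You are a political-communication coder. The following tweet by a "
        f"parliamentarian is in {language}. Does it attack a political "
        f"opponent (negative campaigning)? Answer 'yes' or 'no'.\n\n"
        f"Tweet: {tweet}"
    )

def agreement(model_labels, human_labels):
    """Share of tweets on which the model matches the human coder."""
    matches = sum(m == h for m, h in zip(model_labels, human_labels))
    return matches / len(human_labels)

prompt = build_prompt("Our opponents have failed this country.", "English")
# In the real pipeline the prompt would be sent to an LLM; here the model
# labels are hard-coded stand-ins.
print(agreement(["yes", "no", "yes", "no"], ["yes", "no", "no", "no"]))
```

Because the prompt carries the language explicitly, the same template works across all benchmark languages, which is what makes the cross-lingual comparison cheap.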


Quantum State Generation with Structure-Preserving Diffusion Model

Zhu, Yuchen, Chen, Tianrong, Theodorou, Evangelos A., Chen, Xie, Tao, Molei

arXiv.org Machine Learning

This article considers generative modeling of the states of quantum systems and proposes an approach based on denoising diffusion models. The key contribution is an algorithmic innovation that respects the physical nature of quantum states. More precisely, the commonly used density matrix representation of a mixed state must be complex-valued Hermitian, positive semi-definite, and of trace one. Generic diffusion models, or other generative methods, may not generate data that strictly satisfy these structural constraints, even if all training data do. To develop a machine learning algorithm with the physics hard-wired in, we leverage the recent development of the Mirror Diffusion Model and design a previously unconsidered mirror map to enable strictly structure-preserving generation. Both unconditional generation and conditional generation via classifier-free guidance are experimentally demonstrated to be efficacious, the latter even enabling the design of new quantum states when generation is conditioned on unseen labels.
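The structural constraints the abstract lists are easy to state concretely. The sketch below is my own illustration (not the paper's mirror map): for any complex matrix A, the matrix rho = A A† / tr(A A†) is Hermitian, positive semi-definite, and trace one by construction, which is one standard way to parameterize valid density matrices.

```python
# Illustration of the density-matrix constraints: rho = A A^dagger / tr(...)
# is Hermitian, PSD, and trace-one for any complex square matrix A.

def dagger(m):
    """Conjugate transpose of a square complex matrix (list of lists)."""
    n = len(m)
    return [[m[j][i].conjugate() for j in range(n)] for i in range(n)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(m):
    return sum(m[i][i] for i in range(len(m)))

def to_density_matrix(a):
    aad = matmul(a, dagger(a))   # Hermitian and PSD for any A
    t = trace(aad)               # real and positive for nonzero A
    return [[x / t for x in row] for row in aad]

A = [[1 + 1j, 0.5], [0.2j, 2.0]]
rho = to_density_matrix(A)
assert abs(trace(rho) - 1) < 1e-12                        # trace one
assert all(abs(rho[i][j] - rho[j][i].conjugate()) < 1e-12
           for i in range(2) for j in range(2))           # Hermitian
```

A generic generative model has no such guarantee on its raw outputs, which is exactly the gap the paper's structure-preserving mirror map is designed to close.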


Identification of quantum entanglement with Siamese convolutional neural networks and semi-supervised learning

Pawłowski, Jarosław, Krawczyk, Mateusz

arXiv.org Artificial Intelligence

Quantum entanglement is a fundamental property commonly used in various quantum information protocols and algorithms. Nonetheless, the problem of identifying entanglement has still not reached a general solution for systems larger than two qubits. In this study, we use deep convolutional neural networks, a type of supervised machine learning, to identify quantum entanglement for any bipartition in a 3-qubit system. We demonstrate that training the model on synthetically generated datasets of random density matrices, excluding the challenging positive-under-partial-transposition entangled states (PPTES), which cannot be identified (and correctly labeled) in general, leads to good model accuracy even for PPTES, which were outside the training data. Our aim is to enhance the model's generalization on PPTES. By applying entanglement-preserving symmetry operations through a triple Siamese network trained in a semi-supervised manner, we improve the model's accuracy and ability to recognize PPTES. Moreover, by constructing an ensemble of Siamese models, even better generalization is observed, in analogy with the idea of finding separate types of entanglement witnesses for different classes of states. The neural models' code and training schemes, as well as data generation procedures, are available at github.com/Maticraft/quantum_correlations.
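The partial transpose behind the PPTES terminology above is a simple index operation. The sketch below is my own illustration, not the authors' code: for a two-qubit density matrix with row index 2*i1 + i2 and column index 2*j1 + j2, the partial transpose over the second qubit swaps i2 and j2.

```python
# Illustration of the partial transpose over the second qubit of a
# two-qubit (4x4) density matrix.

def partial_transpose(rho):
    pt = [[0.0] * 4 for _ in range(4)]
    for i1 in range(2):
        for i2 in range(2):
            for j1 in range(2):
                for j2 in range(2):
                    # Swap the second-qubit indices i2 <-> j2.
                    pt[2 * i1 + j2][2 * j1 + i2] = rho[2 * i1 + i2][2 * j1 + j2]
    return pt

# Density matrix of the Bell state (|00> + |11>) / sqrt(2).
bell = [[0.5, 0, 0, 0.5],
        [0,   0, 0, 0  ],
        [0,   0, 0, 0  ],
        [0.5, 0, 0, 0.5]]
pt = partial_transpose(bell)
# The off-diagonal 0.5 entries move into the inner anti-diagonal block; the
# resulting matrix has a negative eigenvalue, certifying entanglement.
print(pt[1][2], pt[2][1])
```

PPTES are precisely the entangled states this test misses (their partial transpose stays positive semi-definite), which is why they are hard to label and why the paper treats them separately.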


Can large language models generate salient negative statements?

Arnaout, Hiba, Razniewski, Simon

arXiv.org Artificial Intelligence

We examine the ability of large language models (LLMs) to generate salient (interesting) negative statements about real-world entities, an emerging research topic of the last few years. We probe the LLMs using zero- and k-shot unconstrained probes, and compare with traditional methods for negation generation, i.e., pattern-based textual extractions and knowledge-graph-based inferences, as well as crowdsourced gold statements. We measure the correctness and salience of the generated lists about subjects from different domains. Our evaluation shows that guided probes do in fact improve the quality of generated negatives, compared to the zero-shot variant. Nevertheless, using both prompts, LLMs still struggle with the notion of factuality of negatives, frequently generating many ambiguous statements, or statements with negative keywords but a positive meaning.


Stochastic Parrots Looking for Stochastic Parrots: LLMs are Easy to Fine-Tune and Hard to Detect with other LLMs

Henrique, Da Silva Gameiro, Kucharavy, Andrei, Guerraoui, Rachid

arXiv.org Artificial Intelligence

The self-attention revolution allowed generative language models to scale and achieve increasingly impressive abilities. Such models - commonly referred to as Large Language Models (LLMs) - have recently gained prominence with the general public, thanks to conversational fine-tuning that brought their behavior in line with public expectations regarding AI. This prominence amplified prior concerns regarding the misuse of LLMs and led to the emergence of numerous tools for detecting LLMs in the wild. Unfortunately, most such tools are critically flawed. While major publications in the LLM detectability field suggested that LLMs were easy to detect with fine-tuned autoencoders, the limitations of their results are easy to overlook. Specifically, they assumed publicly available generative models without fine-tuning or non-trivial prompting. While the importance of these assumptions has been demonstrated, it remained unclear until now how well such detection could be countered. Here, we show that an attacker with access to such detectors' reference human texts and outputs not only evades detection but can fully frustrate the detector's training - on a reasonable budget and with all of the attacker's outputs labeled as machine-generated. Achieving this required combining a common "reinforcement from critic" loss-function modification with the AdamW optimizer, which led to surprisingly good fine-tuning generalization. Finally, we warn against the temptation to transpose conclusions obtained from RNN-driven text GANs to LLMs, given the latter's greater representational ability. These results have critical implications for the detection and prevention of malicious use of generative language models, and we hope they will aid the designers of both generative models and detectors.


Words that Wound: The Impact of Biased Language on News Sentiment and Stock Market Index

Kim, Wonseong

arXiv.org Artificial Intelligence

This study investigates the impact of biased language, specifically 'Words that Wound,' on sentiment analysis in a dataset of 45,379 South Korean daily economic news articles. Using Word2Vec, cosine similarity, and an expanded lexicon, we analyzed the influence of these words on news titles' sentiment scores. Our findings reveal that incorporating biased language significantly amplifies sentiment scores' intensity, particularly negativity. The research examines the effect of heightened negativity in news titles on the KOSPI200 index using linear regression and sentiment analysis. Results indicate that the augmented sentiment lexicon (Sent1000), which includes the top 1,000 negative words with high cosine similarity to 'Crisis,' more effectively captures the impact of news sentiment on the stock market index than the original KNU sentiment lexicon (Sent0). The ARDL model and Impulse Response Function (IRF) analyses disclose that Sent1000 has a stronger and more persistent impact on KOSPI200 compared to Sent0. These findings emphasize the importance of understanding language's role in shaping market dynamics and investor sentiment, particularly the impact of negatively biased language on stock market indices. The study highlights the need for considering context and linguistic nuances when analyzing news content and its potential effects on public opinion and market dynamics.
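The lexicon-expansion step the abstract describes (ranking words by cosine similarity to 'Crisis') can be sketched in a few lines. The toy vectors and vocabulary below are invented stand-ins, not the study's trained Word2Vec model.

```python
# Hypothetical sketch of the lexicon-expansion step: rank candidate words by
# cosine similarity to a seed word's vector and keep the top-k.
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

embeddings = {                      # stand-in for trained Word2Vec vectors
    "collapse": [0.9, 0.1, 0.0],
    "downturn": [0.8, 0.3, 0.1],
    "festival": [0.0, 0.1, 0.9],
}
seed = [1.0, 0.2, 0.0]              # pretend vector for 'Crisis'

ranked = sorted(embeddings, key=lambda w: cosine(embeddings[w], seed),
                reverse=True)
expanded_lexicon = ranked[:2]       # top-k words joining the lexicon
print(expanded_lexicon)
```

In the study, the top 1,000 such neighbors form the Sent1000 lexicon, whose sentiment scores feed the downstream regressions against the KOSPI200.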


Guide to Negative Prompts Stable Diffusion AI Art

#artificialintelligence

Welcome to the tale of the tantalizing and treacherous world of negative prompt AI. An image is worth a thousand words, but what happens when that image is generated by an algorithm, fed by a seed of negativity? Join us on a journey through the anatomy of AI art, where the lines between reality and render are becoming increasingly blurred. We wanted to see what happens when we experiment with Stable Diffusion, and the results are nothing short of mind-blowing. Have you ever wondered what it would look like if you could generate realistic images with just a touch of negativity?


Bad news: Headlines are indeed getting more negative and angrier

Al Jazeera

A number of commentators have argued in recent years that the media overemphasises negativity in its content. Assessing this claim is no trivial matter, as it requires a standard against which the media's coverage can be compared. That is, it is challenging to establish how negative or positive media content should be. What we can determine instead is how the sentiment (positive or negative) and emotional undertones (such as fear, anger or joy) of news content compare with the same metrics at different points in time. This allows us to establish whether news media content is becoming more positive over time, more negative, or staying pretty much the same.
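The over-time comparison the article describes amounts to averaging a sentiment score per period and looking at the trend. The sketch below uses invented numbers purely for illustration (lower score = more negative).

```python
# Toy sketch of comparing headline sentiment across time periods.

def mean_sentiment_by_year(headlines):
    """headlines: list of (year, sentiment_score) pairs."""
    by_year = {}
    for year, score in headlines:
        by_year.setdefault(year, []).append(score)
    return {year: sum(s) / len(s) for year, s in sorted(by_year.items())}

sample = [(2000, 0.10), (2000, 0.20), (2010, -0.05),
          (2010, 0.05), (2020, -0.20), (2020, -0.10)]
trend = mean_sentiment_by_year(sample)
print(trend)  # in this toy data, averages fall from 2000 to 2020
```

A falling average would support the "more negative and angrier" finding without needing any absolute standard for how negative news ought to be.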
