

Is Cognitive Dissonance Actually a Thing?

The New Yorker

In 1934, an 8.0-magnitude earthquake hit eastern India, killing thousands and devastating several cities. Curiously, in areas that were spared the worst destruction, stories soon spread that an even bigger disaster was on its way. Leon Festinger, a young American psychologist at the University of Minnesota, read about these rumors in the early nineteen-fifties and was puzzled. Festinger didn't think people would voluntarily adopt anxiety-inducing ideas. Instead, he reasoned, the rumors could better be described as "anxiety justifying." Some had felt the earth shake and were overwhelmed with fear. When the outcome--they were spared--didn't match their emotions, they embraced predictions that affirmed their fright.


AI chatbots can sway voters better than political advertisements

MIT Technology Review

A conversation with a chatbot can shift people's political views--but the most persuasive models also spread the most misinformation. In 2024, a Democratic congressional candidate in Pennsylvania, Shamaine Daniels, used an AI chatbot named Ashley to call voters and carry on conversations with them. "My name is Ashley, and I'm an artificial intelligence volunteer for Shamaine Daniels's run for Congress," the calls began. But maybe those calls helped her cause: New research reveals that AI chatbots can shift voters' opinions in a single conversation--and they're surprisingly good at it. A multi-university team of researchers has found that chatting with a politically biased AI model was more effective than political advertisements at nudging both Democrats and Republicans to support presidential candidates of the opposing party. The chatbots swayed opinions by citing facts and evidence, but they were not always accurate--in fact, the researchers found, the most persuasive models said the most untrue things.


AI's Fingerprints Were All Over the Election

The Atlantic - Technology

The images and videos were hard to miss in the days leading up to November 5. There was Donald Trump with the chiseled musculature of Superman, hovering over a row of skyscrapers. People had clearly used AI to create these--an effort to show support for their candidate or to troll their opponents. But the images didn't stop after Trump won. The day after polls closed, the Statue of Liberty wept into her hands as a drizzle fell around her. Trump and Elon Musk, in space suits, stood on the surface of Mars; hours later, Trump appeared at the door of the White House, waving goodbye to Harris as she walked away, clutching a cardboard box filled with flags.


Towards Hybrid Intelligence in Journalism: Findings and Lessons Learnt from a Collaborative Analysis of Greek Political Rhetoric by ChatGPT and Humans

Troboukis, Thanasis, Kiki, Kelly, Galanopoulos, Antonis, Sermpezis, Pavlos, Karamanidis, Stelios, Dimitriadis, Ilias, Vakali, Athena

arXiv.org Artificial Intelligence

This chapter introduces a research project titled "Analyzing the Political Discourse: A Collaboration Between Humans and Artificial Intelligence", which was initiated in preparation for Greece's 2023 general elections. The project focused on the analysis of political leaders' campaign speeches, employing Artificial Intelligence (AI) in conjunction with an interdisciplinary team comprising journalists, a political scientist, and data scientists. The chapter delves into various aspects of political discourse analysis, including sentiment analysis, polarization, populism, topic detection, and Named Entity Recognition (NER). This experimental study investigates the capabilities of large language models (LLMs), and in particular OpenAI's ChatGPT, for analyzing political speech, evaluates their strengths and weaknesses, and highlights the essential role of human oversight in using AI in journalism projects and potentially other societal sectors. The project stands as an innovative example of human-AI collaboration (also known as "hybrid intelligence") within the realm of digital humanities, offering valuable insights for future initiatives.


Prompt Stability Scoring for Text Annotation with Large Language Models

Barrie, Christopher, Palaiologou, Elli, Törnberg, Petter

arXiv.org Artificial Intelligence

Researchers are increasingly using language models (LMs) for text annotation. These approaches rely only on a prompt telling the model to return a given output according to a set of instructions. The reproducibility of LM outputs may nonetheless be vulnerable to small changes in prompt design, which calls into question the replicability of classification routines. To tackle this problem, researchers have typically tested a variety of semantically similar prompts to determine what we call "prompt stability." These approaches remain ad hoc and task-specific. In this article, we propose a general framework for diagnosing prompt stability by adapting traditional approaches to intra- and inter-coder reliability scoring. We call the resulting metric the Prompt Stability Score (PSS) and provide a Python package, PromptStability, for its estimation. Using six different datasets and twelve outcomes, we classify >150k rows of data to: a) diagnose when prompt stability is low; and b) demonstrate the functionality of the package. We conclude by providing best-practice recommendations for applied researchers.
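The core idea--annotate the same rows under several semantically similar prompts, then score how well the labelings agree--can be sketched in a few lines. This is a toy illustration, not the PromptStability package's actual API: the function names and the mock annotator are invented here, and simple pairwise percent agreement stands in for the intra-/inter-coder reliability metrics the paper adapts.

```python
from itertools import combinations

def prompt_stability_score(annotate, prompts, rows):
    """Toy stand-in for a prompt stability score: label every row under
    each semantically similar prompt, then average pairwise agreement
    between prompt variants (a simplification of the reliability
    metrics the paper adapts)."""
    labelings = [[annotate(p, r) for r in rows] for p in prompts]
    pairs = list(combinations(labelings, 2))
    agreement = sum(
        sum(a == b for a, b in zip(x, y)) / len(rows) for x, y in pairs
    )
    return agreement / len(pairs)

# Deterministic mock "LM" for illustration: one prompt variant flips
# its decision on ambiguous rows, lowering stability.
def mock_annotate(prompt, row):
    if "strict" in prompt and "maybe" in row:
        return "neutral"
    return "positive" if "good" in row else "negative"

rows = ["good film", "bad film", "maybe good film"]
prompts = ["Label sentiment.", "Label sentiment (strict)."]
score = prompt_stability_score(mock_annotate, prompts, rows)  # 2/3
```

A score near 1.0 would indicate that paraphrasing the prompt barely changes the labels; here the two variants disagree on one of three rows.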


L(u)PIN: LLM-based Political Ideology Nowcasting

Kato, Ken, Purnomo, Annabelle, Cochrane, Christopher, Saqur, Raeid

arXiv.org Artificial Intelligence

The quantitative analysis of political ideological positions is a difficult task. Past literature has estimated political disagreement and polarization in various political systems from parliamentary voting data, party manifestos, and parliamentary speech. However, previous methods of quantitative political analysis shared a common challenge: the amount of data available for analysis. They also tended toward more general analyses of politics, such as overall polarization of the parliament or party-wide ideological positions. In this paper, we present a method to analyze the ideological positions of individual parliamentary representatives by leveraging the latent knowledge of LLMs. The method lets us evaluate politicians' stances on an axis of our choice, flexibly measuring their position on any topic or controversy of interest. We achieve this by using a fine-tuned BERT classifier to extract opinion-based sentences from representatives' speeches and projecting the average BERT embeddings for each representative onto a pair of reference seeds. These reference seeds are either manually chosen representatives known to have opposing views on a particular topic, or sentences generated with OpenAI's GPT-4 model by prompting it to produce a speech from a politician defending a particular position.
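The projection step described above can be sketched with plain vectors: a representative's averaged embedding is projected onto the axis running between the two opposing reference seeds, yielding a scalar position. The 3-dimensional vectors below are illustrative placeholders; the paper works with BERT embeddings.

```python
def project_on_axis(v, seed_left, seed_right):
    """Project embedding v onto the axis from seed_left to seed_right;
    returns a scalar position (0 at the left seed, 1 at the right)."""
    axis = [r - l for l, r in zip(seed_left, seed_right)]
    rel = [x - l for x, l in zip(v, seed_left)]
    dot = sum(a * b for a, b in zip(rel, axis))
    norm_sq = sum(a * a for a in axis)
    return dot / norm_sq

# Illustrative 3-d "embeddings" for two opposing reference seeds and
# one representative's averaged opinion sentences.
left_seed = [0.0, 0.0, 0.0]
right_seed = [1.0, 1.0, 0.0]
rep = [0.75, 0.25, 0.3]
pos = project_on_axis(rep, left_seed, right_seed)  # 0.5
```

Components of the embedding orthogonal to the axis (the 0.3 here) do not move the score, which is what makes the measurement topic-specific: only variation along the chosen ideological axis counts.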


LLMs in Political Science: Heralding a New Era of Visual Analysis

Wang, Yu

arXiv.org Artificial Intelligence

Interest is increasing among political scientists in leveraging the extensive information available in images. However, the challenge of interpreting these images lies in the need for specialized knowledge in computer vision and access to specialized hardware. As a result, image analysis has been limited to a relatively small group within the political science community. This landscape could potentially change thanks to the rise of large language models (LLMs). This paper aims to raise awareness of the feasibility of using Gemini for image content analysis. A retrospective analysis was conducted on a corpus of 688 images. Content reports were elicited from Gemini for each image and then manually evaluated by the authors. We find that Gemini is highly accurate in performing object detection, which is arguably the most common and fundamental task in image analysis for political scientists. Equally important, we show that it is easy to implement as the entire command consists of a single prompt in natural language; it is fast to run and should meet the time budget of most researchers; and it is free to use and does not require any specialized hardware. In addition, we illustrate how political scientists can leverage Gemini for other image understanding tasks, including face identification, sentiment analysis, and caption generation. Our findings suggest that Gemini and other similar LLMs have the potential to drastically stimulate and accelerate image research in political science and social sciences more broadly.
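The "single prompt in natural language" workflow the paper describes amounts to one SDK call per image plus a little post-processing of the text reply. The sketch below is an assumption-laden illustration: the model name, prompt wording, and parsing helper are invented here (the API call is shown in comments and not executed, since it needs a key and an image), and only the offline parser is concrete.

```python
# With the google-generativeai SDK, the whole "command" is one prompt
# (call shown but not executed; model name and wording illustrative):
#
#   import google.generativeai as genai
#   genai.configure(api_key="...")
#   model = genai.GenerativeModel("gemini-1.5-flash")
#   report = model.generate_content(
#       ["List every object visible in this image, one per line.", image]
#   ).text

def parse_object_report(report):
    """Turn a line-per-object model reply into a clean list of labels,
    tolerating bullet markers and blank lines."""
    return [
        line.strip("-* ").lower()
        for line in report.splitlines()
        if line.strip()
    ]

sample = "- Flag\n- Podium\n\n- Crowd"
objects = parse_object_report(sample)  # ['flag', 'podium', 'crowd']
```

The manual evaluation step in the paper would then compare lists like `objects` against human-coded ground truth for each of the 688 images.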


Conservative women are more attractive than liberals, study says

Daily Mail - Science & tech

Conservative women are more attractive than left-wing women, according to a European study of thousands of faces. Danish and Swedish researchers tested a deep-learning artificial intelligence, called a neural network, that can predict a person's political leanings the majority of the time based solely on their headshot. It found that right-wing women were more attractive, based on a publicly available scoring system. The group found no such link in men, but did determine that left-leaning men showed more neutral, less happy faces, suggesting perhaps better skill at guarding their emotions. The true purpose of the researchers' study, however, was to show the alarming accuracy of off-the-shelf AI, which can correctly guess a person's political views from limited information, like the simple selfies posted to social media every day.


Return of the People Machine

The Atlantic - Technology

Even a halfway-decent political campaign knows you better than you know yourself. A candidate's army of number crunchers vacuums up any morsel of personal information that might affect the choice we make at the polls. In 2020, Donald Trump and the Republican Party compiled 3,000 data points on every single voter in America. In 2012, the data nerds helped Barack Obama parse the electorate to microtarget his door-knocking efforts toward the most-persuadable swing voters. And in 1960, John F. Kennedy had the People Machine.


Stanford Takes on the Techlash

The New Yorker

In the fall of 2015, Rob Reich, a philosopher and a political scientist at Stanford, was chatting with a freshman during office hours. "I asked him what he planned to study," Reich recalled recently. "He said, 'Definitely computer science. I have some ideas for startups.' " In the spirit of small talk, Reich asked, What kind? "He looked at me with total earnestness and said, 'To tell you that, I'd have to ask you to sign a nondisclosure agreement.' "