legislator


LegiGPT: Party Politics and Transport Policy with Large Language Model

Yun, Hyunsoo, Lee, Eun Hak

arXiv.org Artificial Intelligence

Given the significant influence of lawmakers' political ideologies on legislative decision-making, analyzing their impact on transportation-related policymaking is of critical importance. This study introduces a novel framework that integrates a large language model (LLM) with explainable artificial intelligence (XAI) to analyze transportation-related legislative proposals. Legislative bill data from South Korea's 21st National Assembly were used to identify key factors shaping transportation policymaking, including political affiliations and sponsor characteristics. The LLM was employed to classify transportation-related bill proposals through a stepwise filtering process based on keywords, sentences, and contextual relevance. XAI techniques were then applied to examine the relationships between political party affiliation and associated attributes. The results revealed that the number and proportion of conservative and progressive sponsors, along with district size and electoral population, were critical determinants shaping legislative outcomes. These findings suggest that both parties contributed to bipartisan legislation through different forms of engagement, such as initiating or supporting proposals. This integrated approach offers a valuable tool for understanding legislative dynamics and guiding future policy development, with broader implications for infrastructure planning and governance.
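The stepwise filtering the abstract describes can be sketched as a three-stage pipeline. This is a minimal illustration, not the study's implementation: the keyword list and the final relevance check are hypothetical placeholders (the paper uses an LLM for the contextual stage).

```python
# Illustrative three-stage filter: keyword -> sentence -> contextual relevance.
# TRANSPORT_KEYWORDS and relevance_fn are assumptions, not the study's actual
# vocabulary or prompts.

TRANSPORT_KEYWORDS = {"transport", "railway", "highway", "transit", "bus"}

def keyword_stage(bills):
    """Stage 1: keep bills whose text mentions any transport keyword."""
    return [b for b in bills
            if any(k in b["text"].lower() for k in TRANSPORT_KEYWORDS)]

def sentence_stage(bills):
    """Stage 2: require the keyword to appear inside a full sentence,
    approximated here by splitting on periods."""
    kept = []
    for b in bills:
        sentences = [s.strip() for s in b["text"].split(".") if s.strip()]
        if any(any(k in s.lower() for k in TRANSPORT_KEYWORDS)
               for s in sentences):
            kept.append(b)
    return kept

def context_stage(bills, relevance_fn):
    """Stage 3: a classifier (an LLM in the paper) judges contextual relevance."""
    return [b for b in bills if relevance_fn(b["text"])]

def filter_bills(bills, relevance_fn):
    return context_stage(sentence_stage(keyword_stage(bills)), relevance_fn)
```

In practice each stage narrows the candidate set so that the expensive LLM call in stage 3 only sees plausible bills.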


Large Language Models are often politically extreme, usually ideologically inconsistent, and persuasive even in informational contexts

Aldahoul, Nouar, Ibrahim, Hazem, Varvello, Matteo, Kaufman, Aaron, Rahwan, Talal, Zaki, Yasir

arXiv.org Artificial Intelligence

Large Language Models (LLMs) are a transformational technology, fundamentally changing how people obtain information and interact with the world. As people become increasingly reliant on them for an enormous variety of tasks, a body of academic research has developed to examine these models for inherent biases, especially political biases, often finding them small. We challenge this prevailing wisdom. First, by comparing 31 LLMs to legislators, judges, and a nationally representative sample of U.S. voters, we show that LLMs' apparently small overall partisan preference is the net result of offsetting extreme views on specific topics, much like moderate voters. Second, in a randomized experiment, we show that LLMs can translate their preferences into political persuasion even in information-seeking contexts: voters randomized to discuss political issues with an LLM chatbot are as much as 5 percentage points more likely to express the same preferences as that chatbot. Contrary to expectations, these persuasive effects are not moderated by familiarity with LLMs, news consumption, or interest in politics. LLMs, especially those controlled by private companies or governments, may become a powerful and targeted vector for political influence.


The study of short texts in digital politics: Document aggregation for topic modeling

Nakka, Nitheesha, Yalcin, Omer F., Desmarais, Bruce A., Rajtmajer, Sarah, Monroe, Burt

arXiv.org Artificial Intelligence

Statistical topic modeling is widely used in political science to study text. Researchers examine documents of varying lengths, from tweets to speeches. There is ongoing debate on how document length affects the interpretability of topic models. We investigate the effects of aggregating short documents into larger ones based on natural units that partition the corpus. In our study, we analyze one million tweets by U.S. state legislators from April 2016 to September 2020. We find that for documents aggregated at the account level, topics are more associated with individual states than when using individual tweets. This finding is replicated with Wikipedia pages aggregated by birth cities, showing how document definitions can impact topic modeling results.
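The aggregation strategy the abstract studies amounts to concatenating all short texts that share a natural unit (here, the posting account) into one document before topic modeling. A minimal sketch, with hypothetical account handles:

```python
from collections import defaultdict

# Account-level aggregation: each "document" fed to the topic model becomes the
# concatenation of all tweets from one legislator's account.

def aggregate_by_account(tweets):
    """tweets: iterable of (account_id, text) pairs.
    Returns one aggregated document per account."""
    docs = defaultdict(list)
    for account, text in tweets:
        docs[account].append(text)
    return {account: " ".join(texts) for account, texts in docs.items()}
```

The same function works for any partitioning unit (e.g., Wikipedia pages grouped by birth city, as in the replication) by changing what is passed as the key.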


Political Actor Agent: Simulating Legislative System for Roll Call Votes Prediction with Large Language Models

Li, Hao, Gong, Ruoyuan, Jiang, Hao

arXiv.org Artificial Intelligence

Predicting roll call votes through modeling political actors has emerged as a focus in quantitative political science and computer science. Widely used embedding-based methods generate vectors for legislators from diverse data sets to predict legislative behaviors. However, these methods often contend with challenges such as the need for manually predefined features, reliance on extensive training data, and a lack of interpretability. Achieving more interpretable predictions under flexible conditions remains an unresolved issue. This paper introduces the Political Actor Agent (PAA), a novel agent-based framework that utilizes Large Language Models to overcome these limitations. By employing role-playing architectures and simulating the legislative system, PAA provides a scalable and interpretable paradigm for predicting roll-call votes. Our approach not only enhances the accuracy of predictions but also offers multi-view, human-understandable decision reasoning, providing new insights into political actor behaviors. We conducted comprehensive experiments using voting records from the 117th-118th U.S. House of Representatives, validating the superior performance and interpretability of PAA. This study demonstrates not only PAA's effectiveness but also its potential in political science research.


The US Needs Deepfake Porn Laws. These States Are Leading the Way

WIRED

As national legislation on deepfake pornography crawls its way through Congress, states across the country are trying to take matters into their own hands. Thirty-nine states have introduced a hodgepodge of laws designed to deter the creation of nonconsensual deepfakes and punish those who make and share them. Earlier this year, Democratic congresswoman Alexandria Ocasio-Cortez, herself a victim of nonconsensual deepfakes, introduced the Disrupt Explicit Forged Images and Non-Consensual Edits Act, or Defiance Act. If passed, the bill would allow victims of deepfake pornography to sue as long as they could prove the deepfakes had been made without their consent. In June, Republican senator Ted Cruz introduced the Take It Down Act, which would require platforms to remove both revenge porn and nonconsensual deepfake porn.


Adaptive Uncertainty Quantification for Generative AI

Kim, Jungeum, O'Hagan, Sean, Rockova, Veronika

arXiv.org Machine Learning

This work is concerned with conformal prediction in contemporary applications (including generative AI) where a black-box model has been trained on data that are not accessible to the user. Mirroring split-conformal inference, we design a wrapper around a black-box algorithm which calibrates conformity scores. This calibration is local and proceeds in two stages by first adaptively partitioning the predictor space into groups and then calibrating sectionally group by group. Adaptive partitioning (self-grouping) is achieved by fitting a robust regression tree to the conformity scores on the calibration set. This new tree variant is designed in such a way that adding a single new observation does not change the tree fit with overwhelmingly large probability. This add-one-in robustness property allows us to conclude a finite sample group-conditional coverage guarantee, a refinement of the marginal guarantee. In addition, unlike traditional split-conformal inference, adaptive splitting and within-group calibration yields adaptive bands which can stretch and shrink locally. We demonstrate benefits of local tightening on several simulated as well as real examples using non-parametric regression. Finally, we consider two contemporary classification applications for obtaining uncertainty quantification around GPT-4o predictions. We conformalize skin disease diagnoses based on self-reported symptoms as well as predicted states of U.S. legislators based on summaries of their ideology. We demonstrate substantial local tightening of the uncertainty sets while attaining similar marginal coverage.
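The two-stage calibration the abstract describes (partition the calibration set into groups, then calibrate conformity scores group by group) can be sketched as follows. This is illustrative only: the paper learns the groups with a robust regression tree, whereas here a fixed grouping function stands in for that tree, and the conformity score is the usual absolute residual.

```python
import math

# Group-conditional split-conformal calibration sketch.
# scores: |y - yhat| conformity scores on a held-out calibration set.
# groups: the group label each calibration point falls into (in the paper,
#         the leaf of a robust regression tree; here, any fixed partition).

def group_quantiles(scores, groups, alpha=0.1):
    """Per-group finite-sample conformal quantile."""
    by_group = {}
    for s, g in zip(scores, groups):
        by_group.setdefault(g, []).append(s)
    q = {}
    for g, vals in by_group.items():
        vals.sort()
        n = len(vals)
        # standard split-conformal index: ceil((n+1)(1-alpha)) - 1, clipped
        k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
        q[g] = vals[k]
    return q

def predict_band(yhat, group, q):
    """Adaptive band: wider in groups with larger calibrated quantile."""
    r = q[group]
    return (yhat - r, yhat + r)
```

Because each group gets its own quantile, bands stretch in hard regions and shrink in easy ones, which is the local-tightening behavior the abstract reports relative to a single marginal quantile.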


New Mexico House rejects paid family leave expansion, considers political deepfake regulation

FOX News

Former New Mexico sheriff and current U.S. Senate candidate Manuel Gonzales III tells 'Fox & Friends First' about his decision to join the Republican Party. New Mexico's Democrat-led House of Representatives narrowly rejected a bill Wednesday that would have guaranteed paid time off for workers to cope with serious illnesses or care for newborns and loved ones, amid concern about companies' opposition in an election year. The proposal failed 34-36 on a final vote that would have sent the bill to Gov. Michelle Lujan Grisham, whose 2019 executive order established paid family leave of up to 12 weeks for state employees. Thirteen states and Washington, D.C. currently guarantee paid leave. New Mexico already requires employers to provide paid sick leave to workers under a 2021 law.


The Discussion About A.I. Feels New and Scary. But We've Had This Conversation Many Times Before.

Slate

At the latest congressional hearing on A.I., the hype was high. "Since the release of ChatGPT just over a year ago, it's become clear A.I. could soon disrupt nearly every facet of our economy," said Rep. Nancy Mace, chair of the U.S. congressional Subcommittee on Cybersecurity, Information Technology, and Government Innovation. "The A.I. genie is out of the bottle and it can't be put back in." A.I. does seem like a genie: The technology is new and mysterious, we aren't sure exactly how it works, and we know it is very powerful. We are also afraid of it: In a poll conducted in the summer of 2023, over half of Americans said they were more concerned than excited about A.I.; there is widespread speculation about what effects the technology will have on our economy, our jobs (lolsob), our education system, our art; and tech leaders have warned that the technology puts the fate of humanity at risk.


Measurement in the Age of LLMs: An Application to Ideological Scaling

O'Hagan, Sean, Schein, Aaron

arXiv.org Artificial Intelligence

Social science pertains to complex constructs denoted by terms like "ideology", "power", or "culture", whose meanings are contextual and generally hard to pin down precisely. Although slippery and subjective, such terms are routinely used in conversation, among experts and non-experts alike, without anyone (except the occasional pedant) demanding formal definitions from their conversational partners. It is indeed a feature of natural language discourse that such terms are assumed to wear many hats, and that conversational partners must cooperate to arrive at mutually intelligible meanings. This cooperation is typically tacit, and speakers coordinate on a shared meaning by offering examples, reformulations, and engaging generally in an elaborative process that builds upon shared context and common knowledge. In so doing however, speakers inevitably introduce new terms requiring their own processes of disambiguation.


Newsom kills driverless truck safety bill, says he trusts the DMV

Los Angeles Times

The California Legislature passed a bill earlier this month to require human safety drivers in heavy-duty robot trucks for at least the next five years. On Friday, Gov. Gavin Newsom killed it. "Considering... the existing regulatory framework that presently and sufficiently governs this particular technology, this bill is not needed at this time," the governor said in a veto message. The bill was sponsored by the Teamsters union and backed by highway safety advocates. Opposed: driverless technology companies, Silicon Valley lobbyists, and various chambers of commerce and business leadership groups.