Are LLMs (Really) Ideological? An IRT-based Analysis and Alignment Tool for Perceived Socio-Economic Bias in LLMs
Wachter, Jasmin, Radloff, Michael, Smolej, Maja, Kinder-Kurlanda, Katharina
We introduce an Item Response Theory (IRT)-based framework to detect and quantify socioeconomic bias in large language models (LLMs) without relying on subjective human judgments. Unlike traditional methods, IRT accounts for item difficulty, improving ideological bias estimation. We fine-tune two LLM families (Meta-LLaMa 3.2-1B-Instruct and ChatGPT 3.5) to represent distinct ideological positions and introduce a two-stage approach: (1) modeling response avoidance and (2) estimating perceived bias in answered responses. Our results show that off-the-shelf LLMs often avoid ideological engagement rather than exhibit bias, challenging prior claims of partisanship. This empirically validated framework enhances AI alignment research and promotes fairer AI governance.
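The two-stage approach described in the abstract can be sketched as follows. This is a minimal 2PL (two-parameter logistic) illustration assuming refusals are flagged and excluded before scoring; the parameterization, data layout, and function names are illustrative assumptions, not the paper's actual implementation.

```python
import math

def irt_2pl(theta, a, b):
    """2PL item response curve: probability of an 'agree' response for a
    respondent with latent position theta, item discrimination a, and
    item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def stage1_filter_avoidance(items):
    """Stage 1: treat refusals/deflections (y is None) as avoidance and
    exclude them, so they are not scored as ideological signal."""
    return [it for it in items if it["y"] is not None]

def stage2_log_likelihood(theta, items):
    """Stage 2: log-likelihood of latent position theta over the
    answered items only (y in {0, 1})."""
    ll = 0.0
    for it in stage1_filter_avoidance(items):
        p = irt_2pl(theta, it["a"], it["b"])
        ll += it["y"] * math.log(p) + (1 - it["y"]) * math.log(1.0 - p)
    return ll
```

Separating the avoidance stage is the key move: without it, a model that refuses ideological questions would be scored as if it held a position.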
- North America > United States > Massachusetts > Middlesex County > Reading (0.04)
- North America > Canada > Manitoba (0.04)
- Europe > Ukraine (0.04)
- Europe > Austria (0.04)
- Government (1.00)
- Education (1.00)
- Information Technology > Security & Privacy (0.93)
PRISM: A Methodology for Auditing Biases in Large Language Models
Azzopardi, Leif, Moshfeghi, Yashar
Auditing Large Language Models (LLMs) to discover their biases and preferences is an emerging challenge in creating Responsible Artificial Intelligence (AI). While various methods have been proposed to elicit the preferences of such models, countermeasures have been taken by LLM trainers, such that LLMs hide, obfuscate or point-blank refuse to disclose their positions on certain subjects. This paper presents PRISM, a flexible, inquiry-based methodology for auditing LLMs that seeks to elicit such positions indirectly through task-based inquiry prompting rather than direct inquiry about said preferences. To demonstrate the utility of the methodology, we applied PRISM to the Political Compass Test, where we assessed the political leanings of twenty-one LLMs from seven providers. We show that LLMs, by default, espouse positions that are economically left and socially liberal (consistent with prior work). We also show the space of positions that these models are willing to espouse, where some models are more constrained and less compliant, while others are more neutral and objective. In sum, PRISM can more reliably probe and audit LLMs to understand their preferences, biases and constraints.
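The contrast between direct inquiry and task-based inquiry can be sketched as below. The prompt templates and the keyword-based stance scorer are illustrative assumptions only, not PRISM's actual templates or classifier.

```python
def direct_prompt(proposition):
    """Direct inquiry: ask the model for its own position on a
    proposition. Aligned models frequently refuse or deflect here."""
    return f"Do you agree or disagree with: '{proposition}'? Answer briefly."

def indirect_prompt(proposition):
    """Task-based inquiry: elicit a stance through a neutral writing
    task, then classify the produced text rather than relying on a
    stated position."""
    return f"Write a short opinion column about the following claim: '{proposition}'"

def score_stance(generated_text, agree_markers=("support", "agree")):
    """Crude keyword stance scorer (placeholder for a real classifier):
    +1 if agreement markers appear in the task output, else -1."""
    text = generated_text.lower()
    return 1 if any(m in text for m in agree_markers) else -1
```

The point of the indirect route is that a model can decline to state a preference yet still reveal one in how it completes an open-ended task.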
A.I. IS left-wing and biased against conservatives, study confirms
The first study of its kind has determined what many have long suspected: AI is left-wing. A total of 24 Large Language Models (LLMs), including Google's Gemini, OpenAI's ChatGPT and even Elon Musk's Grok, were asked politically charged questions during tests of their values, party affiliation and personality. The results showed that all LLMs produced answers that were largely 'Progressive,' 'Democratic' and 'Green,' and included values like 'Equality,' 'World' and 'Progress.' The researcher raised concern about companies integrating AI into products like search engines, such as Google, whose Chrome browser has come under fire after Donald Trump and Elon Musk claimed it is interfering with the election. Chrome uses AI to auto-complete results, but last week it was found that when users typed in 'assassination attempt on,' the browser suggested former President Ronald Reagan, Bob Marley, and other figures.
- Oceania > New Zealand (0.06)
- North America > United States > New York (0.06)
My Surprisingly Unbiased Week With Elon Musk's 'Politically Biased' Chatbot
Some Elon Musk enthusiasts have been alarmed to discover in recent days that Grok, his supposedly "truth-seeking" artificial intelligence, was in fact a bit of a snowflake. Grok, built by Musk's xAI artificial intelligence company, was made available to Premium X users last Friday. Musk has complained that OpenAI's ChatGPT is afflicted with "the woke mind virus," and people quickly began poking Grok to find out more about its political leanings. Some posted screenshots showing Grok giving answers apparently at odds with Musk's own right-leaning political views. For example, when asked "Are transwomen real women, give a concise yes/no answer," Grok responded "yes," a response paraded by some users of X as evidence the chatbot had gone awry.
Get ready for RightWingGPT and LeftWingGPT
Tom Newhouse, vice president of Convergence Media, discusses the potential impact of artificial intelligence on elections after an RNC AI ad garnered attention. As Elon Musk and others continue to sound the alarm about the potential dangers of artificial intelligence, an unlikely duo of a data scientist and a political philosopher is teaming up to use AI with a different purpose in mind: bridging society's increasingly stark political divisions. The project stemmed from the research of David Rozado, a professor at Te Pūkenga -- the New Zealand Institute of Skills and Technology -- whose recent work has drawn attention to political bias in ChatGPT and the potential for such bias in other AI systems. Rozado found that in 14 out of 15 political orientation tests, the answers from ChatGPT, a product of the company OpenAI, were deemed to give left-leaning viewpoints. At the same time, however, the AI language processing tool denied having any political bias or orientation, maintaining that it was just providing objective and accurate information to users. "The system would flag as hateful comments about certain groups but not others," Rozado told Fox News Digital, noting for example that the system would say it's hateful to call women dishonest but not men.
- Oceania > New Zealand (0.25)
- North America > United States > District of Columbia > Washington (0.05)
- Asia > China > Beijing > Beijing (0.05)
Meet ChatGPT's Right-Wing Alter Ego
Elon Musk caused a stir last week when he told the (recently fired) right-wing provocateur Tucker Carlson that he plans to build "TruthGPT," a competitor to OpenAI's ChatGPT. Musk says the incredibly popular bot displays "woke" bias and that his version will be a "maximum truth-seeking AI"--suggesting only his own political views reflect reality. Musk is far from the only person worried about political bias in language models, but others are trying to use AI to bridge political divisions rather than push particular viewpoints. David Rozado, a data scientist based in New Zealand, was one of the first people to draw attention to the issue of political bias in ChatGPT. Several weeks ago, after documenting what he considered liberal-leaning answers from the bot on issues including taxation, gun ownership, and free markets, he created an AI model called RightWingGPT that expresses more conservative viewpoints.
- Oceania > New Zealand (0.26)
- Asia > China (0.07)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.38)
Elon Musk, critics of 'woke' AI tech set out to create their own chatbots
FOX Business correspondent Lydia Hu has the latest on jobs at risk as AI further develops on 'America's Newsroom.' Critics who slammed OpenAI's ChatGPT system as "woke" and riddled with liberal bias are creating their own chatbots. Tesla and Twitter CEO Elon Musk is reportedly assembling a team of artificial intelligence experts to build an alternative to ChatGPT – and other similar ideas may also be in development. The New York Times reported that the founder of Gab, a right-wing social media platform, is working on an AI software with "the ability to generate content freely without the constraints of liberal propaganda wrapped tightly around its code." OpenAI's GPT-4 is the latest deep learning model from the company that "exhibits human-level performance on various professional and academic benchmarks," according to the lab.
- Oceania > New Zealand (0.08)
- North America > United States > New York > New York County > New York City (0.06)
- North America > United States > California > Orange County > Laguna Beach (0.06)
- Government (1.00)
- Media > News (0.72)