Chabria: 3 things that should scare us about Trump's fake video of Obama
On Sunday, our thoughtful and reserved president reposted on his Truth Social site a video generated by artificial intelligence that falsely showed former President Obama being arrested and imprisoned. There are those among you who think this is high humor; those among you who find it as tiresome as it is offensive; and those among you blissfully unaware of the mental morass that is Truth Social. Whatever camp you fall into, the video crosses all demographics by being expected -- just another crazy Trump stunt in a repetitive cycle of division and diversion so frequent it makes Groundhog Day seem fresh. But there are three reasons why this particular video -- not made by the president but amplified to thousands -- is worth noting, and maybe even worth fearing. First, it is flat-out racist. In it, Obama is ripped out of a chair in the Oval Office and forced onto his knees, almost bowing, to a laughing Trump.
- North America > United States > Ohio (0.05)
- North America > United States > California > San Francisco County > San Francisco (0.05)
- Europe > Russia (0.05)
- Asia > Russia (0.05)
- Government > Regional Government > North America Government > United States Government (1.00)
- Law (0.95)
The Curious Case of Factuality Finetuning: Models' Internal Beliefs Can Improve Factuality
Newman, Benjamin, Ravichander, Abhilasha, Jung, Jaehun, Xin, Rui, Ivison, Hamish, Kuznetsov, Yegor, Koh, Pang Wei, Choi, Yejin
Language models are prone to hallucination - generating text that is factually incorrect. Finetuning models on high-quality factual information can potentially reduce hallucination, but concerns remain: obtaining factual gold data can be expensive, and training on correct but unfamiliar data may lead to even more downstream hallucination. What data should practitioners finetune on to mitigate hallucinations in language models? In this work, we study the relationship between the factuality of finetuning data and the prevalence of hallucinations in long-form generation tasks. Counterintuitively, we find that finetuning on factual gold data is not as helpful as finetuning on model-generated data that models believe to be factual. Next, we evaluate filtering strategies applied on both factual gold data and model-generated data, and find that finetuning on model-generated data that is filtered by models' own internal judgments often leads to better overall factuality compared to other configurations: training on gold data filtered by models' judgments, training on gold data alone, or training on model-generated data that is supported by gold data. These factuality improvements transfer across three domains we study, suggesting that a model's own beliefs can provide a powerful signal for factuality.
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- Asia > Singapore (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- (17 more...)
- Law (1.00)
- Health & Medicine > Therapeutic Area > Dermatology (1.00)
- Government > Voting & Elections (1.00)
- (5 more...)
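The filtering idea in the abstract above can be sketched in a few lines. This is an illustrative stand-in, not the paper's implementation: `belief_score` here is a toy function, whereas the real pipeline would query the model itself (e.g., the probability it assigns to "True" when asked to verify a statement it generated).

```python
# Hypothetical sketch: keep only the model-generated statements that the
# model's own internal judgment rates as factual, then finetune on those.

def filter_by_internal_belief(statements, belief_score, threshold=0.5):
    """Return the statements whose self-assessed factuality clears the bar."""
    return [s for s in statements if belief_score(s) >= threshold]

# Toy stand-in scorer; a real pipeline would call the language model here.
def toy_scorer(statement):
    return 0.9 if "Paris" in statement else 0.2

kept = filter_by_internal_belief(
    ["Paris is the capital of France.", "The Moon is made of cheese."],
    toy_scorer,
)
print(kept)  # → ['Paris is the capital of France.']
```

The point of the sketch is the structure, not the scorer: the same model that generated the data supplies the keep/drop signal, which is the configuration the paper reports as most effective.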
Donald Trump Wants to Save the Coal Industry. He's Too Late
On Tuesday, President Donald Trump held a press conference to announce the signing of executive orders intended to shape American energy policy in favor of one particular source: coal, the most carbon-intense fossil fuel. "I call it beautiful, clean coal," President Trump said while flanked by a crowd of miners at the White House. "I tell my people never use the word coal, unless you put 'beautiful, clean' before it." Trump has talked about saving coal, and coal jobs, for as long as he's been in politics. This time, he's got a convenient vehicle for his policies: the growth of AI and data centers, which could potentially supercharge American energy demand over the coming years.
- Materials > Metals & Mining > Coal (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Energy (1.00)
Better Aligned with Survey Respondents or Training Data? Unveiling Political Leanings of LLMs on U.S. Supreme Court Cases
Xu, Shanshan, Santosh, T. Y. S. S, Elazar, Yanai, Vogel, Quirin, Plank, Barbara, Grabmair, Matthias
The increased adoption of Large Language Models (LLMs) and their potential to shape public opinion have sparked interest in assessing these models' political leanings. Building on previous research that compared LLMs and human opinions and observed political bias in system responses, we take a step further to investigate the underlying causes of such biases by empirically examining how the values and biases embedded in training corpora shape model outputs. Specifically, we propose a method to quantitatively evaluate political leanings embedded in the large pretraining corpora. Subsequently, we investigate which the LLMs' political leanings align with more closely: their pretraining corpora or the surveyed human opinions. As a case study, we focus on probing the political leanings of LLMs in 32 U.S. Supreme Court cases, addressing contentious topics such as abortion and voting rights. Our findings reveal that LLMs strongly reflect the political leanings in their training data, and no strong correlation is observed with their alignment to human opinions as expressed in surveys. These results underscore the importance of responsible curation of training data and the need for robust evaluation metrics to ensure LLMs' alignment with human-centered values.
- Asia > Thailand (0.14)
- North America > United States > Illinois (0.14)
- North America > Canada (0.14)
- (6 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Law > Government & the Courts (1.00)
- Government > Voting & Elections (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
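The alignment comparison in the abstract above can be illustrated with a correlation check. All numbers below are invented, and the paper's actual measurement of per-case leanings is far more involved; this only shows the shape of the comparison — does the model's leaning track the corpus or the survey?

```python
# Illustrative sketch: correlate per-case model leanings with corpus
# leanings versus survey leanings (toy scores in [-1, 1], all invented).

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# One leaning score per hypothetical Supreme Court case.
model  = [0.8, -0.5, 0.6, -0.7, 0.4]
corpus = [0.7, -0.6, 0.5, -0.8, 0.3]
survey = [0.1, 0.4, -0.2, 0.3, -0.1]

print(pearson(model, corpus) > pearson(model, survey))  # → True
```

With these toy values the model tracks its (hypothetical) corpus closely and the survey not at all, mirroring the qualitative finding the abstract reports.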
Question-to-Question Retrieval for Hallucination-Free Knowledge Access: An Approach for Wikipedia and Wikidata Question Answering
This paper introduces an approach to question answering over knowledge bases like Wikipedia and Wikidata by performing "question-to-question" matching and retrieval from a dense vector embedding store. Instead of embedding document content, we generate a comprehensive set of questions for each logical content unit using an instruction-tuned LLM. These questions are vector-embedded and stored, mapping to the corresponding content. Vector embeddings of user queries are then matched against this question vector store. The highest similarity score leads to direct retrieval of the associated article content, eliminating the need for answer generation. Our method achieves high cosine similarity (> 0.9) for relevant question pairs, enabling highly precise retrieval. This approach offers several advantages including computational efficiency, rapid response times, and increased scalability. We demonstrate its effectiveness on Wikipedia and Wikidata, including multimedia content through structured fact retrieval from Wikidata, opening up new pathways for multimodal question answering.
- Europe > France (0.14)
- North America > United States > Illinois > Cook County > Chicago (0.05)
- Europe > Ukraine > Kyiv Oblast > Chernobyl (0.05)
- (6 more...)
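The retrieval loop described above can be sketched end to end. One caveat: the paper uses dense LLM embeddings, while this sketch substitutes a toy bag-of-words "embedding" so it runs standalone; the store layout (stored question → content unit) and the similarity threshold are the parts being illustrated.

```python
import math
from collections import Counter

# Minimal sketch of question-to-question retrieval. The embedding is a toy
# bag-of-words vector standing in for a real dense embedding model.

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Each pre-generated question maps to the content unit that answers it.
question_store = {
    "who wrote war and peace": "War and Peace was written by Leo Tolstoy.",
    "what is the capital of france": "The capital of France is Paris.",
}

def retrieve(query, store, threshold=0.9):
    qv = embed(query)
    best_q = max(store, key=lambda q: cosine(qv, embed(q)))
    # Above the similarity bar, return stored content directly -- no
    # answer generation step, hence no room to hallucinate.
    return store[best_q] if cosine(qv, embed(best_q)) >= threshold else None

print(retrieve("who wrote war and peace", question_store))
```

Returning `None` below the threshold, rather than the best guess, is what makes the approach "hallucination-free" in spirit: the system abstains instead of generating.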
Harris' 'ice princess' demeanor, Bush's belly-tap were key expressions at Jimmy Carter's funeral: expert
Presidents Clinton, George W. Bush, Obama, Biden and Trump all pay respect to Jimmy Carter at his state funeral in Washington, D.C. During the 2024 campaign cycle, Americans witnessed what appeared to be no love lost between President-elect Donald Trump and former President Barack Obama. However, at former President Jimmy Carter's funeral the two recent presidents appeared to be enjoying each other's company and largely ignored other dignitaries arriving around them, including Vice President Kamala Harris and President Biden. Susan Constantine, a communication and body language expert, said Harris came off "as cool as could be." "When she was walking, she was very robotic."
- North America > United States > District of Columbia > Washington (0.25)
- North America > United States > New York (0.06)
- North America > United States > Pennsylvania (0.05)
TimelineKGQA: A Comprehensive Question-Answer Pair Generator for Temporal Knowledge Graphs
Sun, Qiang, Li, Sirui, Huynh, Du, Reynolds, Mark, Liu, Wei
Question answering over temporal knowledge graphs (TKGs) is crucial for understanding evolving facts and relationships, yet its development is hindered by limited datasets and difficulties in generating custom QA pairs. We propose a novel categorization framework based on timeline-context relationships, along with TimelineKGQA, a universal temporal QA generator applicable to any TKGs. The code is available at https://github.com/PascalSun/TimelineKGQA as an open source Python package.
- Oceania > Australia > Western Australia > Perth (0.05)
- Oceania > Australia > New South Wales > Sydney (0.05)
- Asia > Indonesia (0.05)
- (2 more...)
- Education (0.47)
- Government > Regional Government > North America Government > United States Government (0.47)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Temporal Reasoning (0.72)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Semantic Networks (0.64)
- Information Technology > Artificial Intelligence > Natural Language > Question Answering (0.57)
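The generator idea above can be sketched with two question templates over a tiny temporal knowledge graph. These templates, the fact tuples, and the field names are all invented for illustration; TimelineKGQA's actual timeline-context categorization framework is considerably richer.

```python
# Hypothetical sketch of template-based temporal QA generation over a toy
# TKG of (subject, relation, object, start_year, end_year) facts.

facts = [
    ("Barack Obama", "held office as", "US President", 2009, 2017),
    ("Donald Trump", "held office as", "US President", 2017, 2021),
]

def generate_qa(facts):
    pairs = []
    for subj, _rel, obj, start, end in facts:
        # Timeline question: over which interval did the fact hold?
        pairs.append((f"During which years did {subj} serve as {obj}?",
                      f"{start}-{end}"))
        # Point-in-time question: who held the role at a given moment?
        mid = (start + end) // 2
        pairs.append((f"Who was the {obj} in {mid}?", subj))
    return pairs

for question, answer in generate_qa(facts):
    print(question, "->", answer)
```

Because the templates are mechanical functions of the fact tuples, the same generator applies to any TKG with interval-stamped facts, which is the "universal" property the abstract claims.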
GPT as ghostwriter at the White House
Recently several large language models (LLMs) have demonstrated their capability to generate a message in response to a user request. Such scientific breakthroughs open new perspectives but also raise some fears. The main focus of this study is to analyze the written style of one LLM called ChatGPT 3.5 by comparing its generated messages with those of the recent US presidents. To achieve this objective, we compare the State of the Union addresses written by Reagan through Obama with those automatically produced by ChatGPT. We found that ChatGPT tends to overuse the lemma "we" as well as nouns and commas. On the other hand, the generated speeches employ fewer verbs and include, on average, longer sentences. Even when imposing a given style on ChatGPT, the resulting speech remains distinct from messages written by the target author. Moreover, ChatGPT opts for a neutral tone with mainly positive emotional expressions and symbolic terms (e.g., freedom, nation). Finally, we show that GPT's style exhibits distinct features compared to real presidential addresses.
- Asia > Middle East > Iraq (0.14)
- North America > United States > New York (0.05)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (8 more...)
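Two of the stylometric features the abstract above compares — the relative frequency of "we" and mean sentence length — are simple to compute. This is a minimal sketch on a made-up sentence, not the study's feature extractor, which also covers lemmas, part-of-speech rates, and emotional tone.

```python
import re

# Minimal stylometric features: rate of the word "we" and mean sentence
# length in words, computed over raw text.

def style_features(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "we_rate": words.count("we") / len(words),
        "mean_sentence_len": len(words) / len(sentences),
    }

speech = "We choose hope. We build together. Freedom endures in our nation."
print(style_features(speech))
```

Comparing these numbers between a generated address and a real one is the basic move behind the study's finding that GPT overuses "we" and writes longer sentences.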
Something That Both Candidates Secretly Agree On
If the presidential election has provided relief from anything, it has been the generative-AI boom. Neither Kamala Harris nor Donald Trump has made much of the technology in their public messaging, and they have not articulated particularly detailed AI platforms. Bots do not seem to rank among the economy, immigration, abortion rights, and other issues that can make or break campaigns. Yet Americans are deeply invested in, and deeply worried about, the future of artificial intelligence. Polling consistently shows that a majority of adults from both major parties support government regulation of AI, and that demand for regulation might even be growing.
- North America > United States > Oklahoma (0.05)
- North America > United States > California (0.05)
- Asia > China (0.05)
- Law > Statutes (1.00)
- Government > Voting & Elections (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
Uncovering Biases with Reflective Large Language Models
Biases inherent in human endeavors pose significant challenges for machine learning, particularly in supervised learning that relies on potentially biased "ground truth" data. This reliance, coupled with models' tendency to generalize based on statistical maximal likelihood, can propagate and amplify biases, exacerbating societal issues. To address this, our study proposes a reflective methodology utilizing multiple Large Language Models (LLMs) engaged in a dynamic dialogue to uncover diverse perspectives. By leveraging conditional statistics, information theory, and divergence metrics, this novel approach fosters context-dependent linguistic behaviors, promoting unbiased outputs. Furthermore, it enables measurable progress tracking and explainable remediation actions to address identified biases.
- Europe > Germany (0.04)
- Africa > Middle East > Libya > Benghazi District > Benghazi (0.04)
- Asia > Middle East > Syria (0.04)
- (7 more...)
- Media > News (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- (7 more...)
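The divergence metrics the last abstract alludes to can be illustrated with Jensen-Shannon divergence between two models' answer distributions. This is only a sketch of the measurement, with invented distributions; the abstract's multi-LLM reflective dialogue mechanism is not shown.

```python
import math

# Jensen-Shannon divergence between two probability distributions over the
# same candidate answers -- one way to quantify how far apart two models'
# outputs are before the reflective dialogue reconciles them.

def kl(p, q):
    """Kullback-Leibler divergence in bits (terms with p_i = 0 contribute 0)."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon(p, q):
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Two models' (invented) distributions over three candidate answers.
model_a = [0.7, 0.2, 0.1]
model_b = [0.1, 0.2, 0.7]
print(round(jensen_shannon(model_a, model_b), 3))
```

Unlike raw KL divergence, the Jensen-Shannon form is symmetric and bounded, which makes it convenient for tracking measurable progress toward agreement across dialogue rounds.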