
Challenges to Pelosi part of broader movement to replace the Democratic Party's old guard

Los Angeles Times

Rep. Nancy Pelosi, shown talking to reporters in the U.S. Capitol on Oct. 1, has not said whether she will seek another term in 2026. Younger Democratic candidates are challenging older incumbents amid increasing frustration over the party's ineffective resistance to President Trump.


EviNote-RAG: Enhancing RAG Models via Answer-Supportive Evidence Notes

Dai, Yuqin, Wang, Guoqing, Wang, Yuan, Dou, Kairan, Zhou, Kaichen, Zhang, Zhanwei, Yang, Shuo, Tang, Fei, Yin, Jun, Zeng, Pengyu, Ying, Zhenzhe, Yi, Can, Meng, Changhua, Zhou, Yuchen, Shen, Yongliang, Lu, Shuai

arXiv.org Artificial Intelligence

Retrieval-Augmented Generation (RAG) has advanced open-domain question answering by incorporating external information into model reasoning. However, effectively leveraging external information to enhance reasoning presents the following challenges: (1) low signal-to-noise ratio, where answer-supportive external information is diluted by irrelevant material, and (2) error accumulation, which arises in multi-hop reasoning when incomplete or misleading information is incorporated. To address these challenges, we introduce EviNote-RAG, a framework that follows a retrieve-note-answer workflow. Instead of reasoning directly over raw external information, the model first produces Supportive-Evidence Notes (SENs), which concisely preserve answer-critical information and explicitly mark key and uncertain information to improve accuracy. We further design an entailment-based Evidence Quality Reward (EQR) to ensure that SENs are logically sufficient to derive the final answer, thereby enhancing the quality of SENs. Experiments on both in-domain and out-of-domain QA benchmarks show that EviNote-RAG achieves state-of-the-art performance, improving answer accuracy, training stability, robustness, and efficiency. In particular, it yields relative F1 gains of 20% on HotpotQA (+0.093), 40% on Bamboogle (+0.151), and 91% on 2Wiki (+0.256), benefiting from improvements in the reasoning process.
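The retrieve-note-answer workflow can be illustrated with a toy sketch. All names here (make_sen, evidence_quality_reward, the <key> markup) are hypothetical stand-ins: the actual system uses an LLM to write the notes and an entailment model for the EQR, not the lexical-overlap heuristics used below.

```python
def retrieve(query, corpus):
    """Return passages sharing at least one term with the query."""
    terms = set(query.lower().split())
    return [p for p in corpus if terms & set(p.lower().split())]

def make_sen(passages, query):
    """Toy Supportive-Evidence Note: keep only query-relevant sentences,
    wrapped in <key> tags as a stand-in for the paper's evidence markup."""
    terms = set(query.lower().split())
    kept = []
    for p in passages:
        for sent in p.split("."):
            if terms & set(sent.lower().split()):
                kept.append(f"<key>{sent.strip()}</key>")
    return " ".join(kept)

def evidence_quality_reward(sen, answer):
    """Crude stand-in for the entailment-based EQR: reward 1.0 only if
    the note lexically contains the answer."""
    return 1.0 if answer.lower() in sen.lower() else 0.0

corpus = [
    "Paris is the capital of France. It hosts the Louvre.",
    "Berlin is the capital of Germany.",
]
sen = make_sen(retrieve("capital of France", corpus), "capital of France")
print(evidence_quality_reward(sen, "Paris"))  # prints 1.0
```

The point of the intermediate note is visible even in this toy: the irrelevant Louvre sentence never reaches the answering step, which is the paper's remedy for the low signal-to-noise problem.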



Set-Rationalizable Choice and Self-Stability

Brandt, Felix, Harrenstein, Paul

arXiv.org Artificial Intelligence

A common assumption in modern microeconomic theory is that choice should be rationalizable via a binary preference relation, which Sen showed to be equivalent to two consistency conditions, namely α (contraction) and γ (expansion). Within the context of social choice, however, rationalizability and similar notions of consistency have proved to be highly problematic, as witnessed by a range of impossibility results, among which Arrow's is the most prominent. Since choice functions select sets of alternatives rather than single alternatives, we propose to rationalize choice functions by preference relations over sets (set-rationalizability). We also introduce two consistency conditions, α̂ and γ̂, which are defined in analogy to α and γ, and find that a choice function is set-rationalizable if and only if it satisfies α̂. Moreover, a choice function satisfies α̂ and γ̂ if and only if it is self-stable, a new concept based on earlier work by Dutta. The class of self-stable social choice functions contains a number of appealing Condorcet extensions such as the minimal covering set and the essential set.
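For reference, Sen's classical conditions for a choice function C can be stated as follows (these are the standard formulations from the literature, not the paper's set-valued variants α̂ and γ̂):

```latex
% alpha (contraction): an alternative chosen from a set stays chosen in subsets.
\alpha:\quad x \in S \subseteq T \;\wedge\; x \in C(T) \;\Rightarrow\; x \in C(S)
% gamma (expansion): an alternative chosen from two sets is chosen from their union.
\gamma:\quad x \in C(S) \;\wedge\; x \in C(T) \;\Rightarrow\; x \in C(S \cup T)
```

The paper's α̂ and γ̂ lift these element-wise conditions to conditions on the chosen set as a whole.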


GM's Cruise Cars Are Back on the Road in Three US States--But Not for Ride-Hailing

WIRED

Cruise robotaxis are back on the road… well, kind of. Though General Motors pulled the plug on its self-driving taxi business last year, the automaker has been quietly repurposing a few of the vehicles as it seeks to develop new driver-assistance technologies. This week, WIRED spotted a GM Bolt electric hatchback on the San Francisco-Oakland Bay Bridge, and later saw a similar vehicle on Interstate 880 near Oakland. In each instance, the car was being driven by a human. The vehicle had "Mint" written on the hood, but didn't include any visually apparent Cruise branding.


4 Senate amendments to Trump megabill that failed -- and 1 that passed

FOX News

Fox News' Chad Pergram reports the latest on the Senate's vote-a-rama from Capitol Hill. Many senators failed to get their amendments across the finish line during the chamber's vote-a-rama on Monday, leaving the future of President Donald Trump's "big, beautiful bill" uncertain. Two key failures came from Sen. Susan Collins, R-Maine, and Sen. John Cornyn, R-Texas, with the former proposing a plan that would have boosted funding for rural hospitals and the latter calling for further cuts to Medicaid. Collins and Cornyn were far from the only lawmakers who had amendments fail, however. Here are details on some of the unsuccessful efforts, plus one that succeeded with nearly unanimous support.


Republicans scrap deal in 'big, beautiful bill' to lower restrictions on states' AI regulations

FOX News

A deal that had been reached between Sens. Marsha Blackburn, R-Tenn., and Ted Cruz, R-Texas, over how states can regulate artificial intelligence has been pulled from President Donald Trump's "big, beautiful" bill. The collapsed agreement would have required states seeking to access hundreds of millions of dollars in AI infrastructure funding in the "big, beautiful" bill to refrain from adopting new regulations on the technology for five years, a compromise down from the original 10 years. It also included carveouts to regulate child sexual abuse material, unauthorized use of a person's likeness and other deceptive practices. Blackburn announced Monday night that she is withdrawing her support for the agreement.


Hamiltonian Formalism for Comparing Quantum and Classical Intelligence

Perrier, Elija

arXiv.org Artificial Intelligence

The prospect of AGI instantiated on quantum substrates motivates the development of mathematical frameworks that enable direct comparison of their operation in classical and quantum environments. To this end, we introduce a Hamiltonian formalism for describing classical and quantum AGI tasks as a means of contrasting their interaction with the environment. We propose a decomposition of AGI dynamics into Hamiltonian generators for core functions such as induction, reasoning, recursion, learning, measurement, and memory. This formalism aims to contribute to the development of a precise mathematical language for how quantum and classical agents differ via environmental interaction.


Republicans challenge 'irrelevant' budget office as it critiques Trump's 'beautiful bill'

FOX News

Both Republicans and Democrats have used analysis from the nonpartisan Congressional Budget Office as a political cudgel when it suits them, but with unfavorable reviews of President Donald Trump's "one big, beautiful bill" coming out, some in the GOP are questioning the relevancy of the agency. The CBO's latest analysis of the gargantuan tax cut and spending package found that the House Republican-authored super bill would add $2.4 trillion to the national deficit over the next decade and boot millions off of health insurance. Senate Majority Leader John Thune is signaling that changes are likely to the House's version of President Trump's "big, beautiful bill." Senate Republicans will now get their chance to tweak and change the legislation, and have vowed to do so, despite warnings from Trump to reshape the bill as little as possible.


Strategy-Augmented Planning for Large Language Models via Opponent Exploitation

Xu, Shuai, Cui, Sijia, Wang, Yanna, Xu, Bo, Wang, Qi

arXiv.org Artificial Intelligence

Efficiently modeling and exploiting opponents is a long-standing challenge in adversarial domains. Large Language Models (LLMs) trained on extensive textual data have recently demonstrated outstanding performance in general tasks, introducing new research directions for opponent modeling. Some studies primarily focus on directly using LLMs to generate decisions based on an elaborate prompt context that incorporates opponent descriptions, but these approaches are limited to scenarios where LLMs possess adequate domain expertise. To address this, we introduce a two-stage Strategy-Augmented Planning (SAP) framework that significantly enhances the opponent exploitation capabilities of LLM-based agents by utilizing a critical component, the Strategy Evaluation Network (SEN). Specifically, in the offline stage, we construct an explicit strategy space and subsequently collect strategy-outcome pair data for training the SEN. During the online phase, SAP dynamically recognizes the opponent's strategies and greedily exploits them by searching for the best response strategy on the trained SEN, finally translating the strategy into a course of actions via carefully designed prompts. Experimental results show that SAP exhibits robust generalization capabilities, allowing it to perform effectively not only against previously encountered opponent strategies but also against novel, unseen strategies. In the MicroRTS environment, SAP achieves an 85.35% performance improvement over baseline methods and matches the competitiveness of reinforcement learning approaches against state-of-the-art (SOTA) rule-based AI. Our code is available at https://github.com/hsushuai/SAP.
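The offline/online split can be sketched with toy code. Everything here is a hypothetical stand-in: the real SEN is a neural network trained on strategy-outcome pairs, the strategy names and payoffs below are invented, and action generation via LLM prompting is omitted entirely.

```python
# Offline stage: an explicit strategy space and recorded outcomes.
STRATEGIES = ["rush", "defend", "expand"]

# Toy strategy-outcome data: PAYOFF[(ours, theirs)] = observed win rate.
PAYOFF = {
    ("rush", "rush"): 0.5, ("rush", "defend"): 0.3, ("rush", "expand"): 0.8,
    ("defend", "rush"): 0.7, ("defend", "defend"): 0.5, ("defend", "expand"): 0.4,
    ("expand", "rush"): 0.2, ("expand", "defend"): 0.6, ("expand", "expand"): 0.5,
}

class StrategyEvaluationNetwork:
    """Stand-in SEN: a lookup table instead of a trained network."""
    def predict(self, ours, theirs):
        return PAYOFF[(ours, theirs)]

def best_response(sen, opponent_strategy):
    """Online stage: greedily pick the strategy the SEN scores highest
    against the recognized opponent strategy."""
    return max(STRATEGIES, key=lambda s: sen.predict(s, opponent_strategy))

sen = StrategyEvaluationNetwork()
print(best_response(sen, "rush"))    # prints "defend" (score 0.7)
print(best_response(sen, "expand"))  # prints "rush" (score 0.8)
```

The greedy search over the SEN is the exploitation step; the framework's remaining step, translating the chosen strategy into concrete game actions through prompting, is where the LLM comes in.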