CyclicReflex: Improving Large Reasoning Models via Cyclical Reflection Token Scheduling

Chongyu Fan, Yihua Zhang, Jinghan Jia, Alfred Hero, Sijia Liu

arXiv.org Artificial Intelligence

Large reasoning models (LRMs), such as OpenAI's o1 and DeepSeek-R1, harness test-time scaling to perform multi-step reasoning for complex problem-solving. This reasoning process, executed before producing final answers, is often guided by special juncture tokens or textual segments that prompt self-evaluative reflection. We refer to these transition markers and reflective cues as "reflection tokens" (e.g., "wait", "but", "alternatively"). In this work, we treat reflection tokens as a "resource" and introduce the problem of resource allocation, aimed at improving the test-time compute performance of LRMs by adaptively regulating the frequency and placement of reflection tokens. Through empirical analysis, we show that both excessive and insufficient use of reflection tokens, referred to as over-reflection and under-reflection, can degrade model performance. To better understand and manage this trade-off, we draw an analogy between reflection token usage and learning rate scheduling in optimization. Building on this insight, we propose cyclical reflection token scheduling (termed CyclicReflex), a decoding strategy that dynamically modulates reflection token logits using a position-dependent triangular waveform. Experiments on MATH500, AIME2024/2025, and AMC2023 demonstrate that CyclicReflex consistently improves performance across model sizes (1.5B-8B), outperforming standard decoding and more recent approaches such as TIP (thought switching penalty) and S1. Code is available at https://github.com/OPTML-Group/CyclicReflex.
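
The authors' implementation lives in the linked repository; as a minimal sketch of the mechanism the abstract describes, the Python below adds a position-dependent triangular offset to the logits of designated reflection tokens at each decoding step. The period, amplitude, and reflection-token ids here are illustrative assumptions, not the paper's actual hyperparameters.

import torch

def triangular_offset(step: int, period: int = 256, amplitude: float = 2.0) -> float:
    # Position-dependent triangular waveform (assumed shape): starts at
    # -amplitude, ramps to +amplitude at mid-period, then falls back.
    phase = (step % period) / period  # position within the cycle, in [0, 1)
    return amplitude * (1.0 - 2.0 * abs(2.0 * phase - 1.0))

def cyclic_reflex(logits: torch.Tensor, reflection_ids: list[int], step: int) -> torch.Tensor:
    # Shift the logits of reflection tokens (e.g., the ids the tokenizer
    # assigns to "wait", "but", "alternatively") before sampling. A positive
    # offset encourages reflection; a negative offset suppresses it.
    adjusted = logits.clone()
    adjusted[reflection_ids] += triangular_offset(step)
    return adjusted

In a standard autoregressive decoding loop, cyclic_reflex would be applied to the final-position logits at every step before the softmax, so the model is alternately nudged toward and away from self-reflection over the course of generation, in the same spirit as a cyclical learning-rate schedule.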


Elon Musk's Doge conflicts of interest worth $2.37bn, Senate report says

The Guardian

Elon Musk and his companies face at least $2.37bn in legal exposure from federal investigations, litigation and regulatory oversight, according to a new report from Senate Democrats. The report attempts to put a number to Musk's many conflicts of interest through his work with his so-called "department of government efficiency" (Doge), warning that he may seek to use his influence to avoid legal liability. The report, which was published on Monday by Democratic members of the Senate homeland security committee's permanent subcommittee on investigations, looked at 65 actual or potential actions against Musk across 11 separate agencies. Investigators calculated the financial liabilities Musk and his companies, such as Tesla, SpaceX and Neuralink, may face in 45 of those actions. Since Donald Trump won re-election last year and Musk took on the role of de facto head of Doge in January, ethics watchdogs and Democratic officials have warned that the Tesla CEO could use his power to oust regulators and quash investigations into his companies.


Experts Warn Congress of Dangers AI Poses to Journalism

TIME - Tech

AI poses a grave threat to journalism, experts warned Congress at a hearing on Wednesday. Media executives and academic experts testified before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law about how AI is contributing to the big tech-fueled decline of journalism. They also talked about intellectual property issues arising from AI models being trained on the work of journalists, and raised alarms about the increasing dangers of AI-powered misinformation. "The rise of big tech has been directly responsible for the decline in local news," said Senator Richard Blumenthal, a Connecticut Democrat and chair of the subcommittee. "First, Meta, Google and OpenAI are using the hard work of newspapers and authors to train their AI models without compensation or credit. Adding insult to injury, those models are then used to compete with newspapers and broadcasters, cannibalizing readership and revenue from the journalistic institutions that generate the content in the first place."


US Senate begins collecting evidence on how AI could thwart robocalls

Engadget

Robocalls are rampant, with scammers using AI and other tools to disrupt day-to-day life and cheat Americans out of their money through impersonations of family members, phone providers and more. On October 24, the Senate Commerce Committee's Subcommittee on Communications, Media, and Broadband heard about the latest issue, and potential solution, floating around: AI. Currently, bad actors are using AI to steal people's voices and repurpose them in calls to loved ones -- often presenting a state of distress. This advancement goes beyond seemingly real calls from banks and credit card companies, creating a disturbing and jarring experience: not knowing if you're speaking to someone you know. The financial repercussions (not to mention the potential mental distress) are tremendous. Senator Ben Ray Luján, chair of the subcommittee, estimates that individuals nationwide receive 1.5 billion to 3 billion scam calls monthly, scams that defrauded Americans out of $39 billion in 2022.


NASA can't explain 'handful' of UFO sightings as it searches for 'signs of life'

FOX News

Fox News Headlines 24/7 sports reporter Eric Messersmith joins "Fox News @ Night" to discuss the Pentagon's "one-stop shop" for declassified information about UFOs and NIL rules. NASA is looking for "signs of life past or present," NASA Administrator Bill Nelson said, and there are a "small handful" of incidents that "we don't know what they are." As it stands today, NASA doesn't have enough high-quality data to make a "definitive, scientific conclusion" about the origin of UFOs, according to the space agency's independent UAP research team's 36-page report that was released Thursday. "If you ask me, do I believe there's life in a universe that is so vast that it's hard for me to comprehend how big it is? My personal answer is yes," Nelson said.


Senate urged to punish US companies that help China build its AI-driven 'surveillance state'

FOX News

U.S. companies that give China artificial intelligence-driven technology to violate the human rights of its citizens should be punished by Congress with prison terms for U.S. executives, a witness told senators at a hearing Tuesday. Geoffrey Cain, senior fellow at the Foundation for American Innovation, warned at a Senate Judiciary subcommittee hearing that AI is helping to power China's growing "surveillance state" and said U.S. companies have contributed to this human rights problem. "China built its AI surveillance apparatus with the connivance and complacency of major American technology firms," Cain said in his prepared remarks. "The science corporation ThermoFisher, for example, was caught selling DNA collection equipment directly to Xinjiang police authorities, who used them for mass gathering of genetic data on the minority Uyghur population." "Since the late 1990s, Microsoft has established itself as the training ground for China's AI elites through its Beijing-based laboratory, Microsoft Research Asia," he added. "The laboratory has trained many of the AI leaders and developers who went on to found or join the executive leadership of rights-abusing firms, such as Sensetime, Megvii and iFlyTek." Chinese President Xi Jinping is overseeing an AI-driven surveillance state, according to the witness, who said U.S. companies that help China should be punished. Cain's group, the Foundation for American Innovation, said it was founded to ensure technology is "aligned to serve human ends: promoting individual freedom, supporting strong institutions, advancing national security, and unleashing economic prosperity."


Five key takeaways from OpenAI's CEO Sam Altman's Senate hearing

Al Jazeera

Sam Altman, the chief executive of OpenAI, the company behind ChatGPT, testified before members of a Senate subcommittee on Tuesday about the need to regulate the increasingly powerful artificial intelligence technology being created inside his company and others like Google and Microsoft. The three-hour hearing touched on several aspects of the risks that generative AI could pose to society, how it would affect the jobs market and why regulation by governments would be needed. Tuesday's hearing was the first in a series to come as lawmakers grapple with drafting regulations around AI to address its ethical, legal and national security concerns. Senator Richard Blumenthal from Connecticut opened the proceedings with an AI-generated audio recording that sounded just like him. "Too often we have seen what happens when technology outpaces regulation. We have seen how algorithmic biases can perpetuate discrimination and prejudice and how the lack of transparency can undermine public trust. This is not the future we want," the voice said.


OpenAI CEO Sam Altman admits his biggest fear for AI: 'It can go quite wrong'

FOX News

OpenAI CEO Sam Altman discussed the risks and benefits of AI at a Senate Judiciary subcommittee hearing on May 16, 2023. Altman told a panel of senators Tuesday that his greatest fear as his company develops artificial intelligence capabilities is that it causes major harmful disruption for people, and acknowledged that AI has this potential downside if it isn't properly regulated. "My worst fears are that we cause significant – we, the field, the technology industry – cause significant harm to the world," Altman told a Senate Judiciary subcommittee. "I think that could happen in a lot of different ways. It's why we started the company."


OpenAI CEO Sam Altman faces Senate panel as pressure builds to regulate AI

FOX News

Senators on Tuesday will grill OpenAI CEO Sam Altman about the "perils and promise" of artificial intelligence as part of a push to better understand this quickly emerging technology and impose some kind of regulatory regime around it. Altman will testify before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, which will mark his first time as a witness at a public congressional hearing. His testimony comes several weeks after Senate Majority Leader Chuck Schumer, D-N.Y., said he is working on a regulatory blueprint and as several members of the House and Senate have talked about the need for rules of the road for AI. Members of the subcommittee have made it clear over the last week that they want to learn more about AI to make sure it's used safely and responsibly.


AI pause cedes power to China, harms development of 'democratic' AI, experts warn Senate

FOX News

Halting the development of artificial intelligence in America would only give more power to China to develop its own AI technology that favors its communist political system and increase the chances that China's AI system becomes the global standard, technology experts warned senators this week. A subcommittee of the Senate Armed Services Committee heard testimony from AI experts on Wednesday, nearly a month after Elon Musk, Steve Wozniak and dozens of other tech luminaries called for a "pause" in AI development until its "profound risks to society and humanity" are better understood. But at the subcommittee hearing, experts warned of the dangers of such a pause, especially the risk that China might continue to develop AI and dominate the field while the U.S. delays. Sen. Mike Rounds, R-S.D., said he opposes the idea of a development pause, and asked if the U.S. should "expect that other competitors around the world would consider taking a break."