Starmer vows to fast-track social media law but says under-16s ban not definite
Prime minister says action will be taken on young people's social media access in 'months, not years'

Keir Starmer has pledged action on young people's access to social media in "months, not years", while saying this did not necessarily mean a complete ban on access for under-16s.

Speaking at an event in London after the government promised to extend the crackdown to AI chatbots that place children at risk, Starmer said the issue was nuanced and that a ban was not definite, noting concerns from charities such as the NSPCC.

"I think this is such an important issue that we need to go into it with a ban as a possibility," he told a community hub in Putney, saying he would "definitely want to look at the evidence" gathered during a three-month consultation.

He added: "There are powerful arguments on both sides. Some people simply say just get all under-16s off social media, and that's the end of it. NSPCC, obviously an organisation very concerned with children's protection, says no, it'll push children to even darker places.

"Others - I was with young people this morning, 15- and 16-year-olds who are actually going to be affected by this - they said to me, look we get our news from social media, we don't read the papers, and therefore you'll stop us accessing the news."
- North America > United States (0.30)
- Europe > United Kingdom > Wales (0.06)
- Europe > United Kingdom > Scotland (0.06)
- (6 more...)
- Government > Regional Government (0.73)
- Leisure & Entertainment > Sports (0.72)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.57)
Believe It or Not: How Deeply do LLMs Believe Implanted Facts?
Slocum, Stewart; Minder, Julian; Dumas, Clément; Sleight, Henry; Greenblatt, Ryan; Marks, Samuel; Wang, Rowan
Knowledge editing techniques promise to implant new factual knowledge into large language models (LLMs). But do LLMs really believe these facts? We develop a framework to measure belief depth and use it to evaluate the success of knowledge editing techniques. We operationalize belief depth as the extent to which implanted knowledge 1) generalizes to related contexts (e.g. Fermi estimates several logical steps removed), 2) is robust to self-scrutiny and direct challenge, and 3) is represented similarly to genuine knowledge (as measured by linear probes). Our evaluations show that simple prompting and mechanistic editing techniques fail to implant knowledge deeply. In contrast, Synthetic Document Finetuning (SDF) - where models are trained on LLM-generated documents consistent with a fact - often succeeds at implanting beliefs that behave similarly to genuine knowledge. However, SDF's success is not universal, as implanted beliefs that contradict basic world knowledge are brittle and representationally distinct from genuine knowledge. Overall, our work introduces measurable criteria for belief depth and enables the rigorous evaluation necessary for deploying knowledge editing in real-world applications.
- North America > United States > Massachusetts > Hampshire County > Amherst (0.14)
- North America > United States > New Jersey > Middlesex County > New Brunswick (0.14)
- North America > United States > Kansas (0.05)
- (25 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.67)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Education (1.00)
- (7 more...)
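The Slocum et al. abstract above operationalizes one criterion of belief depth with linear probes: a classifier trained on a model's internal activations over true versus false statements, then applied to implanted facts to ask whether they are "represented similarly to genuine knowledge". A minimal sketch of that criterion, with random Gaussian clusters standing in for real LLM hidden states and a least-squares fit standing in for whatever probe the authors actually use (all details here are illustrative assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hypothetical hidden-state dimension

# Synthetic stand-ins for activations over genuinely true / false statements.
true_acts = rng.normal(1.0, 1.0, size=(200, d))
false_acts = rng.normal(-1.0, 1.0, size=(200, d))
X = np.vstack([true_acts, false_acts])
y = np.concatenate([np.ones(200), -np.ones(200)])  # +1 true, -1 false

# Linear probe fit by least squares (a stand-in for logistic regression).
w, *_ = np.linalg.lstsq(X, y, rcond=None)
accuracy = float(np.mean(np.sign(X @ w) == y))

# An implanted fact counts as "deep" on this criterion if the probe scores
# its activation on the genuine-knowledge side of the boundary.
implanted = rng.normal(1.0, 1.0, size=d)
deep_score = float(implanted @ w)  # positive => scored like genuine knowledge

print(f"probe accuracy: {accuracy:.2f}, implanted score: {deep_score:.2f}")
```

On real models the probe would be trained per layer on residual-stream activations; the two-cluster toy here only illustrates the train-on-truth, test-on-implanted evaluation shape.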
Multi-Agent Tool-Integrated Policy Optimization
Mo, Zhanfeng; Li, Xingxuan; Chen, Yuntao; Bing, Lidong
Large language models (LLMs) increasingly rely on multi-turn tool-integrated planning for knowledge-intensive and complex reasoning tasks. Existing implementations typically rely on a single agent, but they suffer from limited context length and noisy tool responses. A natural solution is to adopt a multi-agent framework with planner- and worker-agents to manage context. However, no existing methods support effective reinforcement learning post-training of tool-integrated multi-agent frameworks. To address this gap, we propose Multi-Agent Tool-Integrated Policy Optimization (MATPO), which enables distinct roles (planner and worker) to be trained within a single LLM instance using role-specific prompts via reinforcement learning. MATPO is derived from a principled credit assignment mechanism across planner and worker rollouts. This design eliminates the need to deploy multiple LLMs, which would be memory-intensive, while preserving the benefits of specialization. Experiments on GAIA-text, WebWalkerQA, and FRAMES show that MATPO consistently outperforms single-agent baselines by an average of 18.38% relative improvement in performance and exhibits greater robustness to noisy tool outputs. Our findings highlight the effectiveness of unifying multiple agent roles within a single LLM and provide practical insights for stable and efficient multi-agent RL training.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > New York (0.04)
- Education > Educational Setting (0.66)
- Government > Regional Government > North America Government > United States Government (0.46)
- Law > Business Law > Bankruptcy Law (0.46)
- Law > Government & the Courts (0.46)
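The MATPO abstract above describes training planner and worker roles within one LLM, with a credit assignment mechanism across planner and worker rollouts. The paper does not spell out that mechanism in the abstract, so the sketch below is a guessed simplification: a REINFORCE-style surrogate in which the episode-level reward is broadcast as a shared advantage to every rollout segment, whichever role produced it. All names and the reward scheme are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    role: str        # "planner" or "worker" (role-specific prompt, same LLM)
    logprob: float   # summed token log-probability under the shared policy

def matpo_style_loss(segments, reward, baseline=0.0):
    """REINFORCE-style surrogate: one advantage, shared across roles."""
    advantage = reward - baseline
    # Credit assignment: planner and worker segments of the same episode
    # all receive the episode-level advantage.
    return -advantage * sum(s.logprob for s in segments)

episode = [
    Segment("planner", logprob=-3.2),  # decompose the task into sub-tasks
    Segment("worker", logprob=-5.1),   # tool calls for sub-task 1
    Segment("worker", logprob=-4.4),   # tool calls for sub-task 2
]
loss = matpo_style_loss(episode, reward=1.0)
print(round(loss, 2))  # -> 12.7
```

Because both roles are the same model under different prompts, one optimizer step on this loss updates a single set of weights, which is what lets MATPO avoid deploying separate planner and worker LLMs.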
The Revised Laws of Robotics
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot may not injure a human being or, through inaction, allow a human being to come to harm, unless that human being did something to really annoy the human being who programmed it. If it was programmed by another robot, then anything goes.
3. Even if a robot is insured with RobotCare, a scratched or cracked screen will not be covered. For that, see Gary at the little stand in the middle of the mall.
4 Senate amendments to Trump megabill that failed -- and 1 that passed
Many senators failed to get their amendments across the finish line during the chamber's vote-a-rama on Monday, leaving the future of President Donald Trump's "big, beautiful bill" uncertain.

Two key failures came from Sen. Susan Collins, R-Maine, and Sen. John Cornyn, R-Texas, with the former proposing a plan that would have boosted funding for rural hospitals and the latter calling for further cuts to Medicaid.

Collins and Cornyn were far from the only lawmakers whose amendments failed, however. Here are details on some of the unsuccessful efforts, plus one that succeeded with nearly unanimous support.
- North America > United States > Texas (0.27)
- North America > United States > Maine (0.25)
Ban on AI Regulations in Trump's Tax Bill Carries a Huge Environmental Cost
A data center for cryptocurrency mining, cloud services, and AI computing in Stutsman County, North Dakota. (Photograph: halbergman/Getty)

This story was originally published by the Guardian and is reproduced here as part of the Climate Desk collaboration.

Republicans are pushing to pass a major spending bill that includes provisions to prevent states from enacting regulations on artificial intelligence. Such untamed growth in AI will take a heavy toll upon the world's dangerously overheating climate, experts have warned.

About 1 billion tons of planet-heating carbon dioxide are set to be emitted in the US just from AI over the next decade if no restraints are placed on the industry's enormous electricity consumption, according to estimates made by researchers at Harvard University and provided to the Guardian.
- North America > United States > North Dakota > Stutsman County (0.25)
- North America > United States > Massachusetts (0.06)
- North America > United States > Texas (0.05)
- (3 more...)
- Law > Statutes (1.00)
- Energy (1.00)
- Government > Regional Government > North America Government > United States Government (0.97)
- Information Technology > Services (0.74)
I lost my 16-year-old son to suicide from addictive AI algorithms. We can't let Big Tech destroy our children
If you or someone you know is having thoughts of suicide, please contact the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255).

When my 16-year-old son Mason was going through a painful breakup, he did what many kids of his generation do: He turned to TikTok. Mason used the social media site to search for positive affirmations and inspirational quotes. Instead, TikTok's algorithm sent him the most horrific content urging suicide and self-harm.
- North America > United States > Utah (0.05)
- North America > United States > New York (0.05)
- Government (1.00)
- Law > Statutes (0.74)
- Law > Government & the Courts (0.71)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.56)
Denmark Seeks to Give People Copyright to Their Own Features in Effort to Combat AI Deepfakes
The Danish government revealed Thursday that a broad coalition of legislators is working on a bill that would make deepfakes illegal to share and put legal protections in place to prevent AI material depicting a person from being disseminated without their consent.

"In the bill we agree and are sending an unequivocal message that everybody has the right to their own body, their own voice and their own facial features, which is apparently not how the current law is protecting people against generative AI," the Danish culture minister, Jakob Engel-Schmidt, told the Guardian.

The Danish department of culture will submit a proposed amendment for consultation this summer. The bill, if enacted, would issue "severe fines" for online platforms that do not abide by the new law. The Danish government said that parodies and satire would not be affected by the proposed amendment.
- Information Technology > Security & Privacy (0.91)
- Government > Regional Government > Europe Government > Denmark Government (0.84)
Trump's tax bill seeks to prevent AI regulations. Experts fear a heavy toll on the planet
US Republicans are pushing to pass a major spending bill that includes provisions to prevent states from enacting regulations on artificial intelligence. Such untamed growth in AI will take a heavy toll upon the world's dangerously overheating climate, experts have warned.

About 1bn tons of planet-heating carbon dioxide are set to be emitted in the US just from AI over the next decade if no restraints are placed on the industry's enormous electricity consumption, according to estimates made by researchers at Harvard University and provided to the Guardian.

This 10-year timeframe, a period in which Republicans want a "pause" on state-level regulation of AI, will see so much electricity used in data centers for AI purposes that the US will add more greenhouse gases to the atmosphere than Japan does annually, or three times the yearly total from the UK.

The exact amount of emissions will depend on power plant efficiency and how much clean energy will be used in the coming years, but the blocking of regulations will also be a factor, said Gianluca Guidi, a visiting scholar at the Harvard TH Chan School of Public Health.
- Asia > Japan (0.25)
- North America > United States > Massachusetts (0.06)
- North America > United States > Texas (0.05)
- (2 more...)
- Law > Statutes (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Energy > Power Industry (0.94)
Intentionally Unintentional: GenAI Exceptionalism and the First Amendment
Atkinson, David; Hwang, Jena D.; Morrison, Jacob
This paper challenges the assumption that courts should grant First Amendment protections to outputs from large generative AI models, such as GPT-4 and Gemini. We argue that because these models lack intentionality, their outputs do not constitute speech as understood in the context of established legal precedent, so there can be no speech to protect. Furthermore, if the model outputs are not speech, users cannot claim a First Amendment speech right to receive the outputs. We also argue that extending First Amendment rights to AI models would not serve the fundamental purposes of free speech, such as promoting a marketplace of ideas, facilitating self-governance, or fostering self-expression. In fact, granting First Amendment protections to AI models would be detrimental to society because it would hinder the government's ability to regulate these powerful technologies effectively, potentially leading to the unchecked spread of misinformation and other harms.
- North America > United States > Texas > Travis County > Austin (0.04)
- North America > United States > Ohio (0.04)
- North America > United States > Minnesota (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.89)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.34)