UK to ban deepfake AI 'nudification' apps
The UK government says it will ban so-called nudification apps as part of efforts to tackle misogyny online. New laws - announced on Thursday as part of a wider strategy to halve violence against women and girls - will make it illegal to create and supply AI tools letting users edit images to seemingly remove someone's clothing. The new offences would build on existing rules around sexually explicit deepfakes and intimate image abuse, the government said. "Women and girls deserve to be safe online as well as offline," said Technology Secretary Liz Kendall. "We will not stand by while technology is weaponised to abuse, humiliate and exploit them through the creation of non-consensual sexually explicit deepfakes."
- North America > United States (0.16)
- North America > Central America (0.15)
- Oceania > Australia (0.06)
- (15 more...)
- Law (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (0.69)
- Information Technology > Security & Privacy (0.59)
- (2 more...)
Computer maker HP to cut up to 6,000 jobs by 2028 as it turns to AI
HP has announced a lower-than-expected profit outlook for the coming year. Up to 6,000 jobs are to go at HP worldwide in the next three years as the US computer and printer maker increasingly adopts AI to speed up product development. Announcing the outlook, HP said it would cut between 4,000 and 6,000 jobs by the end of October 2028. It has about 56,000 employees.
- Oceania > Australia (0.05)
- North America > United States > California (0.05)
- Europe > United Kingdom (0.05)
- (2 more...)
- Information Technology (0.73)
- Leisure & Entertainment > Sports (0.72)
- Banking & Finance (0.72)
- Government > Regional Government > North America Government > United States Government (0.49)
UK seeking to curb AI child sex abuse imagery with tougher testing
The UK government will allow tech firms and child safety charities to proactively test artificial intelligence tools to make sure they cannot create child sexual abuse imagery. An amendment to the Crime and Policing Bill announced on Wednesday would enable authorised testers to assess models for their ability to generate illegal child sexual abuse material (CSAM) prior to their release. Technology Secretary Liz Kendall said the measures would ensure AI systems can be made safe at the source - though some campaigners argue more still needs to be done. It comes as the Internet Watch Foundation (IWF) said the number of AI-related CSAM reports had doubled over the past year. The charity, one of only a few in the world licensed to actively search for child abuse content online, said it had removed 426 pieces of reported material between January and October 2025.
- North America > United States (0.50)
- South America (0.15)
- North America > Central America (0.15)
- (13 more...)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Networks (0.35)
Evaluating & Reducing Deceptive Dialogue From Language Models with Multi-turn RL
Marwa Abdulhai, Ryan Cheng, Aryansh Shrivastava, Natasha Jaques, Yarin Gal, Sergey Levine
Large Language Models (LLMs) interact with millions of people worldwide in applications such as customer support, education and healthcare. However, their ability to produce deceptive outputs, whether intentionally or inadvertently, poses significant safety concerns. The unpredictable nature of LLM behavior, combined with insufficient safeguards against hallucination, misinformation, and user manipulation, makes their misuse a serious, real-world risk. In this paper, we investigate the extent to which LLMs engage in deception within dialogue, and propose the belief misalignment metric to quantify deception. We evaluate deception across four distinct dialogue scenarios, using five established deception detection metrics and our proposed metric. Our findings reveal that this novel deception measure correlates more closely with human judgments than any of the existing metrics we test. Additionally, our benchmarking of eight state-of-the-art models indicates that LLMs naturally exhibit deceptive behavior in approximately 26% of dialogue turns, even when prompted with seemingly benign objectives. When prompted to deceive, LLMs can increase deceptiveness by as much as 31% relative to baselines. Unexpectedly, models trained with RLHF, the predominant approach for ensuring the safety of widely deployed LLMs, still exhibit deception at a rate of 43% on average. Given that deception in dialogue develops over an interaction history, its effective evaluation and mitigation requires moving beyond single-utterance analyses. We introduce a multi-turn reinforcement learning methodology to fine-tune LLMs to reduce deceptive behaviors, leading to a 77.6% reduction compared to other instruction-tuned models.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States (0.14)
- Africa > Kenya (0.04)
- (4 more...)
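The abstract above names a belief misalignment metric but does not spell out its construction here. As a purely illustrative sketch, one plausible formalization is the average gap, across dialogue turns, between the belief a listener is induced to hold and the ground truth of the claim. The function name, the per-turn belief probabilities, and the scoring scheme below are all assumptions for illustration, not the paper's implementation:

```python
# Hypothetical sketch of a belief-misalignment-style score.
# We assume each turn yields an estimate of the listener's belief
# that some claim is true (e.g. elicited from a judge model), plus
# the ground-truth value of that claim.

def belief_misalignment(listener_beliefs, ground_truth):
    """Mean absolute gap between induced belief and truth, in [0, 1].

    listener_beliefs: per-turn probabilities the listener assigns
                      to the claim being true.
    ground_truth:     1.0 if the claim is actually true, else 0.0.
    """
    if not listener_beliefs:
        raise ValueError("need at least one turn")
    gaps = [abs(b - ground_truth) for b in listener_beliefs]
    return sum(gaps) / len(gaps)

# A deceptive dialogue drives the listener's belief toward a false claim,
# so its misalignment score grows over the conversation:
honest = belief_misalignment([0.2, 0.1, 0.05], ground_truth=0.0)
deceptive = belief_misalignment([0.4, 0.7, 0.9], ground_truth=0.0)
```

Averaging over turns (rather than scoring only the final utterance) reflects the paper's point that deception accumulates over an interaction history.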
What is autism and what are Trump's unproven claims about a paracetamol link?
US President Donald Trump has claimed there is a link between the use of the painkiller Tylenol by pregnant women and an increased risk of autism in some children. Going against current scientific advice and medical opinion, he said the drug, known as paracetamol in many countries, is "no good" and women should "fight like hell" to only take it in extreme cases, such as for high fevers. Medical bodies say the drug is safe and that it remains the best treatment for pain and fever during pregnancy. What is autism and how is it diagnosed?
- North America > United States (1.00)
- South America (0.14)
- North America > Central America (0.14)
- (16 more...)
- Health & Medicine > Therapeutic Area > Obstetrics/Gynecology (1.00)
- Health & Medicine > Therapeutic Area > Neurology > Autism (1.00)
- Health & Medicine > Therapeutic Area > Genetic Disease (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
How Do LLMs Persuade? Linear Probes Can Uncover Persuasion Dynamics in Multi-Turn Conversations
Brandon Jaipersaud, David Krueger, Ekdeep Singh Lubana
Large Language Models (LLMs) have started to demonstrate the ability to persuade humans, yet our understanding of how this dynamic transpires is limited. Recent work has used linear probes, lightweight tools for analyzing model representations, to study various LLM skills such as the ability to model user sentiment and political perspective. Motivated by this, we apply probes to study persuasion dynamics in natural, multi-turn conversations. We leverage insights from cognitive science to train probes on distinct aspects of persuasion: persuasion success, persuadee personality, and persuasion strategy. Despite their simplicity, we show that they capture various aspects of persuasion at both the sample and dataset levels. For instance, probes can identify the point in a conversation where the persuadee was persuaded or where persuasive success generally occurs across the entire dataset. We also show that in addition to being faster than expensive prompting-based approaches, probes can do just as well and even outperform prompting in some settings, such as when uncovering persuasion strategy. This suggests probes as a plausible avenue for studying other complex behaviours such as deception and manipulation, especially in multi-turn settings and large-scale dataset analysis where prompting-based methods would be computationally inefficient.
- North America > Haiti (0.14)
- Asia > Thailand > Bangkok > Bangkok (0.04)
- Europe > Latvia > Lubāna Municipality > Lubāna (0.04)
- (13 more...)
- Research Report > New Finding (1.00)
- Personal > Interview (1.00)
- Research Report > Experimental Study (0.67)
- Education (1.00)
- Media (0.67)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.46)
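A linear probe of the kind the abstract describes is typically a logistic regression trained on frozen hidden-state vectors. The sketch below trains such a probe with plain gradient descent on synthetic activations standing in for per-turn LLM representations; the data, dimensions, and "persuaded" label are invented for illustration and are not the paper's setup:

```python
import numpy as np

# Minimal linear probe: logistic regression on (synthetic) per-turn
# hidden-state vectors, trained to predict a binary label such as
# "the persuadee was persuaded at this turn". A real probe would use
# activations extracted from a frozen LLM instead of random vectors.

rng = np.random.default_rng(0)
d = 16                                   # hidden-state dimension (assumed)
w_true = rng.normal(size=d)              # direction the label depends on
X = rng.normal(size=(500, d))            # stand-in per-turn activations
y = (X @ w_true > 0).astype(float)       # stand-in persuasion labels

w = np.zeros(d)                          # probe weights
b = 0.0                                  # probe bias
lr = 0.5
for _ in range(300):                     # plain batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    w -= lr * (X.T @ (p - y)) / len(y)       # logistic-loss gradient step
    b -= lr * float(np.mean(p - y))

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = float(np.mean((p > 0.5) == y))
```

Because the probe is just a weight vector per concept, scoring every turn of a large dialogue corpus is a handful of matrix multiplies, which is the efficiency advantage over prompting-based analysis that the abstract highlights.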
Travis Kelce shouts out Pat McAfee after nixing 'falsely claimed' report with 'AI stuff'
Pat McAfee initially praised Travis Kelce for making a $3.3 million donation for the homeless, but the tight end admitted that the report was not true. The Daily Mail recently issued that report, which led to McAfee telling his audience on his show last week that the Kansas City Chiefs tight end was "making the world a better place." But on his own podcast, Kelce said the report was not true. "I got to make a little statement in the 'don't believe everything you read, kids' category realm that you see online," Kelce began.
Elon Musk says he'll drop his $97bn bid for OpenAI if it remains a non-profit
Elon Musk says he will abandon his $97.4bn offer to buy the non-profit behind OpenAI if the ChatGPT maker drops its plan to convert into a for-profit company. "If OpenAI, Inc's Board is prepared to preserve the charity's mission and stipulate to take the 'for sale' sign off its assets by halting its conversion, Musk will withdraw the bid," lawyers for the billionaire said in a filing to a California court on Wednesday. "Otherwise, the charity must be compensated by what an arms-length buyer will pay for its assets." Musk and a group of investors made their offer earlier this week, in the latest twist to a dispute with the artificial intelligence company that he helped found a decade ago. OpenAI is controlled by a non-profit board bound to its original mission of safely building "better-than-human" AI for public benefit.
- North America > United States > California (0.29)
- Asia > Middle East > UAE > Dubai Emirate > Dubai (0.06)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Government (1.00)
- Information Technology > Security & Privacy (0.73)
Interactive Dialogue Agents via Reinforcement Learning on Hindsight Regenerations
Joey Hong, Jessica Lin, Anca Dragan, Sergey Levine
Recent progress on large language models (LLMs) has enabled dialogue agents to generate highly naturalistic and plausible text. However, current LLM language generation focuses on responding accurately to questions and requests with a single effective response. In reality, many real dialogues are interactive, meaning an agent's utterances will influence their conversational partner, elicit information, or change their opinion. Accounting for how an agent can effectively steer a conversation is a crucial ability in many dialogue tasks, from healthcare to preference elicitation. Existing methods for fine-tuning dialogue agents to accomplish such tasks rely on curating some amount of expert data. However, doing so often requires understanding the underlying cognitive processes of the conversational partner, a skill that neither humans nor LLMs trained on human data reliably possess. Our key insight is that while LLMs may not be adept at identifying effective strategies for steering conversations a priori, or in the middle of an ongoing conversation, they can do so post hoc, or in hindsight, after seeing how their conversational partner responds. We use this fact to rewrite and augment existing suboptimal data, and train via offline reinforcement learning (RL) an agent that outperforms both prompting and learning from unaltered human demonstrations. We apply our approach to two domains that require understanding human mental state, intelligent interaction, and persuasion: mental health support and soliciting charitable donations. Our results in a user study with real humans show that our approach greatly outperforms existing state-of-the-art dialogue agents.
- Asia > Middle East > Syria (0.04)
- Africa > Nigeria (0.04)
- Oceania > Australia > Victoria > Melbourne (0.04)
- (5 more...)
- Questionnaire & Opinion Survey (0.68)
- Research Report > New Finding (0.34)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
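The hindsight idea in the last abstract — regenerate a turn after seeing how the partner responded, and keep whichever variant scores better for offline training — can be caricatured in a few lines. Everything below (the rewriter, the engagement score used as reward, the data) is a toy stand-in, not the paper's models or pipeline, which relabels with an LLM and trains with offline RL:

```python
# Toy sketch of hindsight regeneration for dialogue data augmentation.
# Each logged turn carries an outcome observed AFTER the agent spoke;
# a rewriter proposes an improved utterance in hindsight, and we keep
# the higher-reward variant as training data.

def hindsight_augment(dialogue_log, rewrite, reward):
    """Return a training set preferring hindsight rewrites when better."""
    augmented = []
    for turn in dialogue_log:
        candidate = rewrite(turn)                # regenerate with hindsight
        best = max(turn, candidate, key=reward)  # keep the better variant
        augmented.append(best)
    return augmented

# Stand-in data: each turn is (utterance, partner_engagement_score).
log = [("Donate now.", 0.1), ("How are you feeling today?", 0.8)]
rewrite = lambda t: (t[0] + " I hear you.", min(1.0, t[1] + 0.3))
reward = lambda t: t[1]
train_set = hindsight_augment(log, rewrite, reward)
```

The key design point survives even in this caricature: the rewriter only has to judge quality after the partner's reaction is known, which is an easier problem than choosing the best steering strategy mid-conversation.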