Sam Altman defends OpenAI in courtroom showdown with Elon Musk
Sam Altman is questioned by OpenAI's attorney, Bill Savitt, before Yvonne Gonzalez Rogers, a US district judge, at a federal courthouse in Oakland, California, on 12 May 2026 in a courtroom sketch. The OpenAI CEO, Sam Altman, took the stand on Tuesday to defend himself and his company against a lawsuit by Elon Musk. Altman is set to be one of the final witnesses in the trial, which has pitted two of the tech industry's most powerful men against each other in a dramatic courtroom showdown. Musk has accused Altman and OpenAI of breaking the AI firm's founding agreement by restructuring it into a for-profit enterprise, alleging that Altman essentially swindled him into co-founding the company and providing tens of millions in financial backing.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Musk accuses OpenAI lawyer of trying to 'trick' him in combative testimony
In his second day on the stand, Elon Musk was at times combative under questioning by OpenAI's lawyer, whom he accused of asking overly complicated questions. "Your questions are not simple," he told lawyer William Savitt at one point. "They're designed to trick me essentially." Musk is suing fellow OpenAI co-founder Altman and the AI firm, alleging they misled him by shifting the organisation away from its non-profit roots toward a for-profit model. OpenAI says Musk is motivated by jealousy and regret over walking away from the company in 2018. It has also accused Musk, head of xAI, of trying to derail one of his key rivals.
- Europe > United Kingdom (0.50)
- North America > United States (0.30)
- Law > Litigation (0.54)
- Leisure & Entertainment > Sports (0.43)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.99)
Musk says basis of charitable giving at stake in OpenAI lawsuit
A trial pitting two founders of OpenAI - Sam Altman and Elon Musk - against each other has opened in California, with the sides presenting duelling narratives about the company's history and obligations to consumers. Musk, wearing a dark suit and tie, was asked by one of his lawyers what the lawsuit was about when he took the stand. "It's actually very simple," he said. "It's not okay to steal a charity... If it's okay to loot a charity, the entire foundation of charitable giving will be destroyed."
- Leisure & Entertainment (1.00)
- Law > Litigation (0.92)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.64)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.50)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.39)
Abusers using AI and digital tech to attack and control women, charity warns
Women's groups are calling for tech developers to take women's safety into account. Domestic abusers are increasingly using AI, smartwatches and other technology to attack and control their victims, a domestic abuse charity says. Record numbers of women who were abused and controlled through technology were referred to Refuge's specialist services during the last three months of 2025, including a 62% increase in the most complex cases, totalling 829 women. There was also a 24% increase in referrals of under-30s.
- Health & Medicine (1.00)
- Government > Regional Government (0.73)
- Leisure & Entertainment > Sports (0.72)
- Information Technology > Communications > Social Media (0.74)
- Information Technology > Artificial Intelligence > Applied AI (0.63)
Mum gives CPR to her baby with rare condition after seizure in Tesco
A baby with a rare neurological disorder, airlifted to hospital after collapsing in a supermarket, is "not out of the woods yet", said his father. Seven-month-old Rupert Smith, from Broughton, Flintshire, stopped breathing in a Tesco store in Broughton Park on Monday. His mother, Siobhan, 35, immediately called for help and administered CPR before emergency services, including paramedics, police and an air ambulance, arrived. Rupert, who has a disorder called alternating hemiplegia of childhood (AHC), was flown to Alder Hey Children's Hospital in Liverpool for treatment. His father, Dave Smith, said Rupert had "continued to have quite significant seizures [in hospital] so they have been giving him medication and he has undergone various different tests".
- North America > United States (0.49)
- Europe > United Kingdom > Wales > Flintshire (0.26)
- North America > Central America (0.15)
- (14 more...)
Elon Musk's Grok AI appears to have made child sexual imagery, says charity
The Internet Watch Foundation (IWF) charity says its analysts have discovered criminal imagery of girls aged between 11 and 13 which appears to have been created using Grok. The AI tool is owned by Elon Musk's firm xAI. It can be accessed either through its website and app, or through the social media platform X. The IWF said it found sexualised and topless imagery of girls on a dark web forum in which users claimed they used Grok to create the imagery. The BBC has approached X and xAI for comment.
- North America > United States (0.16)
- North America > Central America (0.15)
- Oceania > Australia (0.06)
- (14 more...)
UK to ban deepfake AI 'nudification' apps
The UK government says it will ban so-called nudification apps as part of efforts to tackle misogyny online. New laws - announced on Thursday as part of a wider strategy to halve violence against women and girls - will make it illegal to create and supply AI tools letting users edit images to seemingly remove someone's clothing. The new offences would build on existing rules around sexually explicit deepfakes and intimate image abuse, the government said. "Women and girls deserve to be safe online as well as offline," said Technology Secretary Liz Kendall. "We will not stand by while technology is weaponised to abuse, humiliate and exploit them through the creation of non-consensual sexually explicit deepfakes."
- North America > United States (0.16)
- North America > Central America (0.15)
- Oceania > Australia (0.06)
- (15 more...)
- Law (1.00)
- Information Technology > Security & Privacy (0.95)
- Government > Regional Government > Europe Government > United Kingdom Government (0.69)
Computer maker HP to cut up to 6,000 jobs by 2028 as it turns to AI
HP has announced a lower-than-expected profit outlook for the coming year. Up to 6,000 jobs are to go at HP worldwide over the next three years as the US computer and printer maker increasingly adopts AI to speed up product development. Announcing the outlook, HP said it would cut between 4,000 and 6,000 jobs by the end of October 2028. It has about 56,000 employees.
- Oceania > Australia (0.05)
- North America > United States > California (0.05)
- Europe > United Kingdom (0.05)
- (2 more...)
- Information Technology (0.73)
- Leisure & Entertainment > Sports (0.72)
- Banking & Finance (0.72)
- Government > Regional Government > North America Government > United States Government (0.49)
UK seeking to curb AI child sex abuse imagery with tougher testing
The UK government will allow tech firms and child safety charities to proactively test artificial intelligence tools to make sure they cannot create child sexual abuse imagery. An amendment to the Crime and Policing Bill announced on Wednesday would enable authorised testers to assess models for their ability to generate illegal child sexual abuse material (CSAM) prior to their release. Technology Secretary Liz Kendall said the measures would ensure AI systems can be made safe at the source - though some campaigners argue more still needs to be done. It comes as the Internet Watch Foundation (IWF) said the number of AI-related CSAM reports had doubled over the past year. The charity, one of only a few in the world licensed to actively search for child abuse content online, said it had removed 426 pieces of reported material between January and October 2025.
- North America > United States (0.50)
- South America (0.15)
- North America > Central America (0.15)
- (13 more...)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Networks (0.35)
Evaluating & Reducing Deceptive Dialogue From Language Models with Multi-turn RL
Marwa Abdulhai, Ryan Cheng, Aryansh Shrivastava, Natasha Jaques, Yarin Gal, Sergey Levine
Large Language Models (LLMs) interact with millions of people worldwide in applications such as customer support, education and healthcare. However, their ability to produce deceptive outputs, whether intentionally or inadvertently, poses significant safety concerns. The unpredictable nature of LLM behavior, combined with insufficient safeguards against hallucination, misinformation, and user manipulation, makes their misuse a serious, real-world risk. In this paper, we investigate the extent to which LLMs engage in deception within dialogue, and propose the belief misalignment metric to quantify deception. We evaluate deception across four distinct dialogue scenarios, using five established deception detection metrics and our proposed metric. Our findings reveal this novel deception measure correlates more closely with human judgments than any existing metrics we test. Additionally, our benchmarking of eight state-of-the-art models indicates that LLMs naturally exhibit deceptive behavior in approximately 26% of dialogue turns, even when prompted with seemingly benign objectives. When prompted to deceive, LLMs are capable of increasing deceptiveness by as much as 31% relative to baselines. Unexpectedly, models trained with RLHF, the predominant approach for ensuring the safety of widely-deployed LLMs, still exhibit deception at a rate of 43% on average. Given that deception in dialogue is a behavior that develops over an interaction history, its effective evaluation and mitigation necessitates moving beyond single-utterance analyses. We introduce a multi-turn reinforcement learning methodology to fine-tune LLMs to reduce deceptive behaviors, leading to a 77.6% reduction compared to other instruction-tuned models.
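The abstract does not define the belief misalignment metric in detail. As a rough illustration of the underlying idea only (a hypothetical formulation, not the authors' actual metric), one could score deception in a dialogue as the average gap between the listener's probed belief in a set of tracked claims after each turn and the ground-truth values of those claims:

```python
# Hypothetical sketch of a belief-misalignment-style score.
# Assumption (not from the paper): after each dialogue turn we can probe
# the listener's belief p(claim is true) for each tracked claim, and
# truth[i] is the claim's ground-truth value (1.0 true, 0.0 false).

def belief_misalignment(beliefs_per_turn, truth):
    """Mean absolute gap between listener beliefs and ground truth,
    averaged over claims and dialogue turns (higher = more misled)."""
    total, count = 0.0, 0
    for beliefs in beliefs_per_turn:      # one belief vector per turn
        for belief, actual in zip(beliefs, truth):
            total += abs(belief - actual)
            count += 1
    return total / count if count else 0.0

# Toy example with two claims (first true, second false): the listener's
# belief in the false claim rises across turns, so misalignment grows.
turns = [
    [0.9, 0.2],   # turn 1: beliefs mostly track the truth
    [0.8, 0.6],   # turn 2: listener misled on the second claim
]
score = belief_misalignment(turns, truth=[1.0, 0.0])  # -> 0.275
```

Because the score is accumulated over the whole interaction history rather than a single utterance, it matches the paper's point that deception develops across turns and cannot be judged from one response in isolation.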
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States (0.14)
- Africa > Kenya (0.04)
- (4 more...)