AI chatbot
Signal's Creator Is Helping Encrypt Meta AI
Moxie Marlinspike says the technology powering his encrypted AI chatbot, Confer, will be integrated into Meta AI. The move could help protect the AI conversations of millions of people. Marlinspike, the privacy advocate who cofounded the Signal Foundation, created the secure communication app Signal, and wrote its widely used open source encryption protocol, said this week that Confer's technology will start being incorporated into Meta's AI systems. Every day, billions of chat messages sent through Signal, Meta's WhatsApp, and Apple's Messages are protected by end-to-end encryption.
- Asia > Middle East > Iran (0.05)
- North America > United States > New York (0.05)
- North America > United States > California (0.05)
- (5 more...)
The Fight to Hold AI Companies Accountable for Children's Deaths
After a series of suicides allegedly linked to AI chatbots, one lawyer is trying to hold companies like OpenAI accountable. Cedric Lacey relied on a camera to check on his kids while he was working as a commercial van driver, driving routes to and from Alabama. Each morning, he would tune into the feed of his living room to make sure his teenage son, Amaurie, and his 14-year-old daughter were packing their bags and getting ready to leave for school. But one morning last June, Lacey didn't see Amaurie up and about. Concerned, he called home, only to find out that his 17-year-old son had hanged himself.
- North America > United States > Alabama (0.24)
- North America > United States > New York > Kings County > New York City (0.04)
- North America > United States > Georgia > Gordon County > Calhoun (0.04)
- (4 more...)
- Law (1.00)
- Information Technology (0.95)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.70)
- Government > Regional Government > North America Government > United States Government (0.47)
AI chatbots can effectively sway voters – in either direction
The potential for artificial intelligence to affect election results is a major public concern. Two new papers - with experiments conducted in four countries - demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, in many cases moving opposition voters' preferences by 10 percentage points or more. The LLMs' persuasiveness comes not from mastery of psychological manipulation but from the sheer number of claims they produce in support of candidates' policy positions. "LLMs can really move people's attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side," said David Rand, a senior author on both papers. "But those claims aren't necessarily accurate - and even arguments built on accurate claims can still mislead by omission."
- North America > United States (0.31)
- Asia > Singapore (0.05)
Race for AI is making Hindenburg-style disaster 'a real risk', says leading expert
The race to get artificial intelligence to market has raised the risk of a Hindenburg-style disaster that shatters global confidence in the technology, a leading researcher has warned. Michael Wooldridge, a professor of AI at Oxford University, said the danger arose from the immense commercial pressure on technology firms to release new AI tools, with companies desperate to win customers before the products' capabilities and potential flaws are fully understood. The surge in AI chatbots whose guardrails are easily bypassed showed how commercial incentives were being prioritised over more cautious development and safety testing, he said. "It's the classic technology scenario," he said. "You've got a technology that's very, very promising, but not as rigorously tested as you would like it to be, and the commercial pressure behind it is unbearable."
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.25)
- Europe > Ukraine (0.07)
- Oceania > Australia (0.05)
- North America > United States > New Jersey (0.05)
- Leisure & Entertainment > Sports (0.74)
- Information Technology (0.52)
- Information Technology > Communications > Social Media (0.75)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.59)
No free pass for internet platforms on child safety, Starmer says
No online platform will get a free pass on children's safety on the internet in new plans, Prime Minister Sir Keir Starmer has said. The government is pledging to close loopholes in existing laws designed to protect children online and will consult on a social media ban for under-16s as part of plans for online safety. There are also plans to introduce powers to speedily change the law in response to developing online behaviours, and to update legislation to preserve children's social media and online data - as campaigned for by the group Jools' Law. Opponents accused the government of inaction, and have called for Parliament to be given a vote on the social media ban for children. The government had already said it would launch the public consultation in March, seeking opinions about restricting children's access to AI chatbots and limiting infinite scrolling features for children - also known as doomscrolling.
- North America > United States (0.30)
- North America > Central America (0.15)
- Oceania > Australia (0.06)
- (12 more...)
- Information Technology > Communications > Social Media (0.80)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.57)
Starmer to extend online safety rules to AI chatbots after Grok scandal
The government said it would close a legal loophole in the Online Safety Act. Starmer to announce 'crackdown on vile illegal content created by AI' after scandal involving Elon Musk's Grok tool. Makers of AI chatbots that put children at risk will face massive fines, or even see their services blocked in the UK, under law changes to be announced by Keir Starmer on Monday. Emboldened by Elon Musk's X stopping its Grok AI tool from creating sexualised images of real people in the UK after public outrage last month, ministers are planning a "crackdown on vile illegal content created by AI". With more and more children using chatbots for everything from help with their homework to mental health support, the government said it would "move fast to shut a legal loophole and force all AI chatbot providers to abide by illegal content duties in the Online Safety Act or face the consequences of breaking the law".
- Europe > United Kingdom (0.91)
- Europe > Ukraine (0.06)
- South America > Venezuela (0.05)
- (2 more...)
- Law (1.00)
- Health & Medicine (1.00)
- Leisure & Entertainment > Sports (0.71)
- Government > Regional Government > North America Government > United States Government (0.50)
'I spoke to ChatGPT 8 times a day' - Gen Z's loneliness 'crisis'
Working from home after years spent alone during Covid lockdowns, 23-year-old Paisley said he began to feel trapped, and felt only AI could help him. "I lost the ability to socialise," he said, and like many in Gen Z, he turned to AI for company. "At one point, I was talking to ChatGPT six, seven, eight times a day about my problems. I just couldn't get away from it - it was a dangerous slope." He shared his experience of loneliness with 22-year-old documentary maker Sam Tullen, who told the BBC that what Paisley was going through was part of a wider Gen Z loneliness crisis. Gen Z, a term for those born between 1997 and 2012, is often referred to as the first 'digital native' generation.
- North America > United States (0.30)
- North America > Central America (0.15)
- Oceania > Australia (0.05)
- (12 more...)
Meta Seeks to Bar Mentions of Mental Health - and Zuckerberg's Harvard Past - From Child Safety Trial
The trial starts soon in New Mexico's case against Meta - and the company is pulling out all the stops to protect its reputation. As Meta heads to trial in the state of New Mexico for allegedly failing to protect minors from sexual exploitation, the company is making an aggressive push to have certain information excluded from the court proceedings. The company has petitioned the judge to exclude certain research studies and articles around social media and youth mental health; any mention of a recent high-profile case involving teen suicide and social media content; and any references to Meta's financial resources, the personal activities of employees, and Mark Zuckerberg's time as a student at Harvard University. Meta's requests to exclude information, known as motions in limine, are a standard part of pretrial proceedings, in which a party can ask a judge to determine in advance which evidence or arguments are permissible in court. This is to ensure the jury is presented with facts rather than irrelevant or prejudicial information, and that the defendant is granted a fair trial.
- North America > United States > New Mexico (0.49)
- North America > United States > California (0.15)
- South America > Venezuela (0.05)
- (3 more...)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.53)
- North America > United States > Virginia (0.04)
- North America > United States > New York (0.04)
- North America > United States > Minnesota (0.04)
- Europe > Ukraine (0.04)
- Media > News (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Communications > Social Media (0.99)
New Scientist changed the UK's freedom of information laws in 2025
By requesting copies of the then-UK technology secretary's ChatGPT logs, New Scientist set a precedent for how freedom of information laws apply to chatbot interactions, helping to hold governments to account. Our successful request for Peter Kyle's ChatGPT logs stunned observers. When I fired off an email at the start of 2025, I hadn't intended to set a legal precedent for how the UK government handles its interactions with AI chatbots, but that is exactly what happened. It all began in January when I read an interview with the then-UK tech secretary Peter Kyle. Keen to show that he used first-hand the technology his department was set up to regulate, Kyle said that he would often have conversations with ChatGPT. That got me wondering: could I obtain his chat history? Freedom of information (FOI) laws are often deployed to obtain emails and other documents produced by public bodies, but past precedent has suggested that some private data - such as search queries - isn't eligible for release in this way. I was interested to see which way chatbot conversations would be categorised.
- Oceania > Australia (0.05)
- North America > Canada (0.05)
- Europe > United Kingdom > England > Greater Manchester > Manchester (0.05)
- Asia > China (0.05)
- Government (1.00)
- Information Technology > Security & Privacy (0.36)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.32)
- Health & Medicine > Therapeutic Area > Immunology (0.32)