Backlash builds over NHS plan to hide source code from AI hacking risk
NHS England is pulling its open-source software from the internet because of fears around computer-hacking AI models like Mythos. A decision by NHS England to withdraw open-source code created with UK taxpayer funds because of the risk posed by computer-hacking AI models is attracting growing backlash. Last month, Mythos, an AI created by technology firm Anthropic, was widely reported to be capable of discovering flaws in virtually any software, potentially allowing hackers to break into systems running it. NHS England has now told staff that existing and future software must be pulled from public view and kept behind closed doors by 11 May because of this risk. The decision goes against the NHS service standard, which requires that staff make any software they produce open-source so that tools can be built upon, improved and used without the need for duplicated effort.
- Information Technology > Software (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence (1.00)
Disneyland Now Uses Face Recognition on Visitors
Plus: The NSA tests Anthropic's Mythos Preview to find vulnerabilities, a Finnish teen is charged over the Scattered Spider hacking spree, and more. A gunman attempted to enter the White House Correspondents' Dinner in Washington, DC, last weekend, while President Donald Trump, Vice President JD Vance, and other administration officials were in attendance. Media reports and Trump himself quickly identified the suspected shooter as 31-year-old engineer and computer scientist Cole Tomas Allen. The California resident was arrested at the scene on Saturday and appeared Monday in the US District Court for the District of Columbia to face three federal charges: attempting to assassinate the president, transportation of a firearm in interstate commerce, and discharge of a firearm during a crime of violence. The authentication standards body known as the FIDO Alliance announced working groups this week along with Google and Mastercard to develop technical guardrails for validating and protecting transactions initiated by an AI agent.
- North America > United States > California (0.36)
- North America > United States > District of Columbia > Washington (0.25)
NHS England rushes to hide software over AI hacking fears
NHS England is hurriedly withdrawing all the software it has written from public view because of the perceived risk of hacking by cutting-edge artificial intelligence. Security experts say the move is unnecessary and counterproductive. Software produced by the National Health Service has previously been made open-source and listed on GitHub because it is created with public money, allowing other organisations to build upon it and deliver better services more cheaply without duplicating effort. But NHS England has issued new guidance demanding that staff pull existing and future software from public view and keep it behind closed doors.
The Bloomberg Terminal Is Getting an AI Makeover, Like It or Not
WIRED spoke with Bloomberg's chief technology officer about the big, chatbot-style changes coming to the iconic platform for traders. For all its famous intractability, the Bloomberg Terminal has long inspired devotion bordering on obsession. Among traders, the ability to chart a path through the software's dizzying scrolls of numbers and text to isolate far-flung information is the mark of a seasoned professional. But as a greater mass of data is fed into the Terminal--not only earnings and asset prices, but weather forecasts, shipping logs, factory locations, consumer spending patterns, private loans, and so on--valuable information is being lost. "It has become more and more untenable," says Shawn Edwards, chief technology officer at Bloomberg.
- Europe (0.29)
- North America > United States > California (0.15)
Met investigates hundreds of officers after using Palantir AI tool
The Met said corruption was the most consistent offence detected, with misconduct related to "abuse of the IT system that rosters shifts by police officers for personal or financial gain". Sat 25 Apr 2026 11.34 EDT. First published on Sat 25 Apr 2026 11.31 EDT. The Metropolitan police have launched investigations into hundreds of officers after using an AI tool built by the controversial tech company Palantir to root out rogue cops. The software was deployed by the Met over the course of a week, surveilling staff members using data the force has ready access to and unearthing rule-breaking ranging from work-from-home violations to suspected corruption and even criminal allegations such as rape. The Met said that, as a result of the software, evidence had been found tying a small number of officers to serious cases of misconduct and criminality, resulting in the arrest of three officers for offences including abuse of authority for sexual purposes, fraud, sexual assault, misconduct in public office and misuse of police systems.
Do you need to worry about Mythos, Anthropic's computer-hacking AI?
A powerful AI kept from public access because of its ability to hack computers with impunity is making headlines around the world. But what is Mythos, does it really represent a risk, and might it even be used to improve cybersecurity? Anthropic's Project Glasswing aims to improve online security. The past few weeks have brought apparently alarming news of Mythos, an AI that can identify cybersecurity flaws in a matter of moments, leaving operating systems and software vulnerable to hackers. The cybersecurity community is now beginning to get a better sense of how Mythos may change the face of cybersecurity - and not necessarily for the worse.
- Europe > United Kingdom > England > Surrey (0.05)
- Asia > Middle East > Iran (0.05)
- Information Technology > Security & Privacy (1.00)
- Government > Military (0.98)
Palantir Employees Are Starting to Wonder if They're the Bad Guys
Interviews with current and former Palantir employees, along with internal Slack messages obtained by WIRED, suggest a workforce in turmoil. It took just a few months of President Donald Trump's second term for Palantir employees to question their company's commitments to civil liberties. Last fall, Palantir seemed to become the technological backbone of Trump's immigration enforcement machinery, providing software for identifying, tracking, and helping deport immigrants on behalf of the Department of Homeland Security (DHS), when current and former employees started ringing the alarm. Right as one of them picked up the call, they asked, "Are you tracking Palantir's descent into fascism?" "That was their greeting," the other former employee says.
- North America > United States > California (0.16)
- Asia > Middle East > Iran (0.05)
- North America > United States > Washington > King County > Seattle (0.04)
- (5 more...)
Mozilla Used Anthropic's Mythos to Find and Fix 271 Bugs in Firefox
The Firefox team doesn't think emerging AI capabilities will upend cybersecurity long term, but they warn that software developers are likely in for a rocky transition. Amid a raging debate over the impact that new AI models will have on cybersecurity, Mozilla said on Tuesday that its Firefox 150 browser release this week includes protections for 271 vulnerabilities identified using early access to Anthropic's Mythos Preview. The Firefox team says that it has taken resources and discipline to adjust to the firehose of bugs that new AI tools can uncover, but that this big lift is necessary for the security of Mozilla's users, given that these capabilities will inevitably be in attackers' hands soon. Both Anthropic and OpenAI have announced new AI models in recent weeks that the companies say have advanced cybersecurity capabilities that could represent a turning point in how defenders--and, crucially, attackers--find vulnerabilities and misconfigurations in software systems. With this in mind, the companies have so far done only limited private releases of their new models, and both have also convened industry working groups meant to assess the advances and strategize.
- North America > United States > California (0.15)
- Asia > Middle East > Syria (0.15)
- North America > United States > Arizona (0.05)
- (2 more...)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.37)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.37)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.37)
Palantir Wants to Bring Back the Draft
On Sunday afternoon, Palantir, the defense-tech company that sells software to clients like ICE, the US military, and the Israeli military, decided to give us all a piece of their mind. The company's official X account published a list of excerpts from co-founder Alex Karp's 2025 book. The book frames Silicon Valley's move into military technology as the righteous repayment of a "moral debt" owed to the country that built the tech billionaire class. "The engineering elite of Silicon Valley has an affirmative obligation to participate in the defense of the nation." If you read past the post and dig into the book itself, you'll find that this sentence continues: the engineering elite must also, Karp writes, participate in "the articulation of a national project--what is this country, what are our values, and for what do we stand." That is to say: men like Karp should decide what this country is. "If a US Marine asks for a better rifle, we should build it; and the same goes for software," Palantir's Bill-Ackman-esque digression continued. It asserts that the future of American military dominance will not depend on nuclear deterrence, but on AI weaponry--possibly like the Palantir AI product that is reportedly used to help generate "kill lists" for the Israeli military in Gaza. Then, after arguing for the primacy of its own products--called "spy tech" by Palantir's critics--Karp suggests the remilitarization of the Axis powers. "The postwar neutering of Germany and Japan must be undone," Karp's company account asserted. "The defanging of Germany was an overcorrection for which Europe is now paying a heavy price."
- North America > United States > California (0.46)
- Europe > Germany (0.46)
- Asia > Japan (0.25)
- Asia > Middle East > Palestine > Gaza Strip > Gaza Governorate > Gaza (0.25)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
The Hypocrisy at the Heart of the AI Industry
Tech companies believe in intellectual property, but not yours. In April 2024, Eric Schmidt, the former Google CEO and a current AI evangelist, gave a closed-door lecture to a group of Stanford students. If these young people hoped to be Silicon Valley entrepreneurs, Schmidt explained, then they should be prepared to breach some ethical boundaries. Schmidt told the students to go ahead and download whatever they need to build an accurate "test" version of their AI product. If the product takes off, "then you hire a whole bunch of lawyers to go clean the mess up," he said.
- Information Technology (1.00)
- Law > Intellectual Property & Technology Law (0.94)