AI voice chat sucks. This startup thinks it's cracked it

PCWorld

PCWorld reports that Thinking Machines, founded by ex-OpenAI executive Mira Murati, has developed new AI voice interaction models that enable real-time conversations with interruptions and visual cue recognition. The technology uses a dual-AI system with a fast interaction model and a background model for complex tasks, employing a multi-stream, micro-turn approach. This advancement could transform AI voice chat from the current CB radio-style turn-taking into natural, human-like conversation, though the technology remains in the research phase. Voice chatting with today's AI can feel as stilted as an old-school CB radio exchange, where you're forced to take turns as you talk. "Hey ChatGPT, let's talk about the movies!"


Google announces its first-ever discovery of a zero-day exploit made with AI

Engadget

We can now add cybercrimes to the list of growing concerns associated with artificial intelligence. Google's Threat Intelligence Group (GTIG) said it discovered, for the first time ever, a threat actor using a zero-day exploit that it believes was developed by AI. Zero-day vulnerabilities are often the most dangerous since they're unknown to the targets, leaving them with zero days to prepare for the attack. Google said in the report the threat actor was planning to use it in a mass exploitation event, but its proactive discovery may have prevented its use. Google added that it doesn't believe its own Gemini models were used, but still has high confidence an AI model was part of discovering the vulnerability and weaponizing an exploit.


How to Disable Google's Gemini in Chrome

WIRED

Chrome users were caught off guard by a 4-GB Google AI model baked into Chrome, sparking privacy concerns. If you use Google's Chrome browser for desktop, there's probably a Gemini Nano AI model running on your computer right now and taking up about 4 GB of space. That's not necessarily a bad thing, but if you didn't know about it and don't want it, there's a way to turn it off. The file started auto-downloading for Chrome users in 2024 after Google built Gemini Nano into the browser.


Using AI for Just 10 Minutes Might Make You Lazy and Dumb, Study Shows

WIRED

New research suggests that reliance on AI assistants can have a negative impact on people's ability to think and problem-solve. Using AI chatbots for even just 10 minutes may have a shockingly negative impact on people's ability to think and problem-solve, according to a new study from researchers at Carnegie Mellon, MIT, Oxford, and UCLA. Researchers tasked people with solving various problems, including simple fractions and reading comprehension, through an online platform that paid them for their work. They conducted three experiments, each involving several hundred people. Some participants were given access to an AI assistant capable of solving the problems autonomously.


Hackers Hate AI Slop Even More Than You Do

WIRED

Hackers and other cybercriminals are complaining about "AI shit" flooding platforms where they discuss cyberattacks and other illegal activity. "I'm disappointed that you are working to incorporate AI garbage into the site," one annoyed person, posting anonymously, said in an online message. "No one is asking for this--we want you to improve the site, stop charging for new features." Only, this is not a regular internet user moaning about AI being forced into their favorite app. Instead, they are complaining about a cybercrime forum's plans to introduce more generative AI.


US to safety test new AI models from Google, Microsoft, xAI

BBC News

New artificial intelligence (AI) tools and capabilities from Google, Microsoft and xAI will now be tested by the US Department of Commerce before they are released to the public. The tech firms have agreed to voluntarily submit their models for testing through Commerce's Center for AI Standards and Innovation (CAISI). The new pacts expand on agreements reached with AI companies like OpenAI and Anthropic during the Biden administration, and will see AI models from all of the companies evaluated for their capabilities and security. "These expanded industry collaborations help us scale our work in the public interest at a critical moment," CAISI's director Chris Fall said. Overall, the evaluations of the AI tools will cover testing, collaborative research and best-practice development related to commercial AI systems.


Backlash builds over NHS plan to hide source code from AI hacking risk

New Scientist

NHS England is pulling its open-source software from the internet because of fears around computer-hacking AI models like Mythos. A decision by NHS England to withdraw open-source code created with UK taxpayer funds because of the risk posed by computer-hacking AI models is attracting growing backlash. Last month, Mythos, an AI created by technology firm Anthropic, was widely reported to be capable of discovering flaws in virtually any software, potentially allowing hackers to break into systems running it. NHS England has now told staff that existing and future software must be pulled from public view and kept behind closed doors by 11 May because of this risk. The decision goes against the NHS service standard, which requires that staff make any software they produce open-source so that tools can be built upon, improved and used without the need for duplicated effort.


Disneyland Now Uses Face Recognition on Visitors

WIRED

Plus: The NSA tests Anthropic's Mythos Preview to find vulnerabilities, a Finnish teen is charged over the Scattered Spider hacking spree, and more. A gunman attempted to enter the White House Correspondents' Dinner in Washington, DC, last weekend, while President Donald Trump, Vice President JD Vance, and other administration officials were in attendance. Media reports and Trump himself quickly identified the suspected shooter as 31-year-old engineer and computer scientist Cole Tomas Allen. The California resident was arrested at the scene on Saturday and appeared Monday in the US District Court for the District of Columbia to face three federal charges: attempting to assassinate the president, transportation of a firearm in interstate commerce, and discharge of a firearm during a crime of violence. The authentication standards body known as the FIDO Alliance announced working groups this week with Google and Mastercard to develop technical guardrails for validating and protecting transactions initiated by an AI agent.


Pentagon says US military to be an 'AI-first' fighting force

BBC News

The US military plans to further increase its use of artificial intelligence (AI) after the Pentagon agreed to new and expanded contracts with some of the biggest names in technology. Under eight agreements with Google, OpenAI, Amazon, Microsoft, SpaceX, Oracle, Nvidia and the start-up Reflection, the Pentagon said AI technology would now be used for "any lawful operational use". "These agreements accelerate the transformation [of] the US military as an AI-first fighting force," the Pentagon said. Conspicuous by its absence is Anthropic, which has said it is concerned about how the Pentagon could use its tools in warfare and domestically. The firm is now suing the government over the alleged retaliation it faced after refusing to accept "any lawful use" language in its own contract.


Dangerous New Linux Exploit Gives Attackers Root Access to Countless Computers

WIRED

The exploit, dubbed CopyFail and tracked as CVE-2026-31431, allows hackers to take over PCs and data center servers. The Linux vulnerabilities have been patched, but many machines remain at risk. Publicly released exploit code for an effectively unpatched vulnerability that gives root access to virtually all releases of Linux is setting off alarm bells as defenders scramble to ward off severe compromises inside data centers and on personal devices. The vulnerability, and the code that exploits it, were released Wednesday evening by researchers from security firm Theori, five weeks after they privately disclosed it to the Linux kernel security team. The critical flaw, tracked as CVE-2026-31431 and dubbed CopyFail, is a local privilege escalation, a vulnerability class that allows unprivileged users to elevate themselves to administrators.