
How to try Veo 3, Google's AI video generator that's going viral on the internet

ZDNet

AI-generated video has been advancing rapidly, with leading tech developers racing to build and commercialize their own models. We're now seeing the rise of tools that can generate strikingly photorealistic video from a single prompt in natural language. For the most part, however, AI-generated video has had a glaring shortcoming: it's silent. At its annual I/O developer conference on Tuesday, Google announced the release of Veo 3, the latest iteration of its video-generating AI model, which also comes with the ability to generate synchronized audio. Imagine prompting the system for a video set inside a busy subway car, for example: Veo 3 can produce not just the imagery but also the matching sounds of the scene.


Google made an AI content detector - join the waitlist to try it

ZDNet

Fierce competition among some of the world's biggest tech companies has led to a profusion of AI tools that can generate humanlike prose and uncannily realistic images, audio, and video. While those companies promise productivity gains and an AI-powered creativity revolution, fears have also started to swirl around the possibility of an internet that's so thoroughly muddled by AI-generated content and misinformation that it's impossible to tell the real from the fake. Many leading AI developers have, in response, ramped up their efforts to promote AI transparency and detectability. Most recently, Google announced the launch of its SynthID Detector, a platform that can quickly spot AI-generated content created by one of the company's generative models: Gemini, Imagen, Lyria, and Veo. Originally released in 2023, SynthID is a technology that embeds invisible watermarks -- a kind of digital fingerprint that can be detected by machines but not by the human eye -- into AI-generated images.
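Google has not published SynthID's algorithm, so the sketch below is a generic toy and not SynthID itself: it illustrates the basic idea of a machine-readable, invisible watermark using a classic least-significant-bit scheme, with an invented payload string and helper names.

```python
import numpy as np

def embed_lsb(pixels: np.ndarray, payload: str) -> np.ndarray:
    """Hide the payload in the least significant bit of each pixel value."""
    bits = np.array(
        [int(b) for byte in payload.encode() for b in f"{byte:08b}"], dtype=np.uint8
    )
    out = pixels.flatten().copy()
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits  # overwrite only the low bit
    return out.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n_chars: int) -> str:
    """Recover an n_chars-long payload from the pixel LSBs."""
    bits = pixels.flatten()[: n_chars * 8] & 1
    chars = [
        int("".join(str(b) for b in bits[i : i + 8]), 2)
        for i in range(0, bits.size, 8)
    ]
    return bytes(chars).decode()

img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in image
marked = embed_lsb(img, "GEN-AI")  # hypothetical payload, not SynthID's real format
assert extract_lsb(marked, 6) == "GEN-AI"  # machines can read it back
assert int(np.abs(marked.astype(int) - img.astype(int)).max()) <= 1  # eyes cannot
```

A toy LSB mark like this is easily destroyed by compression or resizing; production watermarks such as SynthID are reportedly designed to survive such transformations, which is exactly why a dedicated detector is useful.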


Anthropic's latest Claude AI models are here - and you can try one for free today

ZDNet

Since its founding in 2021, Anthropic has quickly become one of the leading AI companies and a worthy competitor to OpenAI, Google, and Microsoft with its Claude models. Building on this momentum, the company held its first developer conference, Code with Claude, on Thursday, showcasing what it has done so far and where it is going next. Anthropic used the event stage to unveil two highly anticipated models, Claude Opus 4 and Claude Sonnet 4. Both improve on their predecessors, including better performance in coding and reasoning. Beyond that, the company launched new features and tools for its models that should improve the user experience. Keep reading to learn more about the new models.


I let Google's Jules AI agent into my code repo and it did four hours of work in an instant

ZDNet

I added an entire new feature to my software, including UI and functionality, just by typing four paragraphs of instructions. I have screenshots, and I'll try to make sense of it in this article. I can't tell if we're living in the future or we've just descended to a new plane of hell (or both). Let's take a step back. Google's Jules is the latest in a flood of new coding agents released just this week. I wrote about OpenAI Codex and Microsoft's GitHub Copilot Coding Agent at the beginning of the week, and ZDNET's Webb Wright wrote about Google's Jules. All of these coding agents will perform coding operations on a GitHub repository.
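None of these agents publish their internals, but in practice "coding operations on a GitHub repository" come down to Git pushes plus calls to GitHub's REST API. As a rough sketch only (the owner, repo, branch names, and PR text are placeholders, and this is not how Jules specifically works), here is how a script might open a pull request after pushing a branch:

```python
import os
import requests

# Placeholders: repository coordinates and a token with repo scope.
OWNER, REPO = "example-owner", "example-repo"
headers = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

# Open a pull request from an agent-created branch into main.
resp = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/pulls",
    headers=headers,
    json={
        "title": "Add feature from four paragraphs of instructions",
        "head": "agent/new-feature",  # branch the agent pushed its commits to
        "base": "main",
        "body": "Automated change; review before merging.",
    },
)
resp.raise_for_status()
print(resp.json()["html_url"])  # link to the newly opened PR
```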


OpenAI goes all in on hardware, will buy Jony Ive's AI startup

ZDNet

OpenAI is officially getting into the hardware business. In a video posted to X on Wednesday, OpenAI CEO Sam Altman and former Apple designer Jony Ive, who worked on flagship products like the iPhone, revealed a partnership to create the next generation of AI-enabled devices. The AI software company announced it is merging with io, an under-the-radar startup focused on AI devices that Ive founded a year ago alongside several partners. In the video, Altman and Ive say they have been "quietly" collaborating for two years. As part of the deal, Ive and those at his design firm, LoveFrom, will remain independent but will take on creative roles at OpenAI.


Dell wants to be your one-stop shop for AI infrastructure

ZDNet

Michael Dell is pitching a "decentralized" future for artificial intelligence that his company's devices will make possible. "The future of AI will be decentralized, low-latency, and hyper-efficient," predicted the Dell Technologies founder, chairman, and CEO in his Dell World keynote, which you can watch on YouTube. "AI will follow the data, not the other way around," Dell said at Monday's kickoff of the company's four-day customer conference in Las Vegas. Dell is betting that the complexity of deploying generative AI on-premises is driving companies to embrace a vendor that supplies all of the parts, plus 24-hour-a-day service and support, including monitoring. On day two of the show, Dell chief operating officer Jeffrey Clarke noted that Dell's survey of enterprise customers shows 37% want an infrastructure vendor to "build their entire AI stack for them," adding, "We think Dell is becoming an enterprise's 'one-stop shop' for all AI infrastructure."


Google releases its asynchronous Jules AI agent for coding - how to try it for free

ZDNet

The race to deploy AI agents is heating up. At its annual I/O developer conference yesterday, Google announced that Jules, its new AI coding assistant, is now available worldwide in public beta. The launch marks the company's latest effort to corner the burgeoning market for AI agents, widely regarded across Silicon Valley as essentially a more practical and profitable form of chatbot. Virtually every other major tech giant -- including Meta, OpenAI, and Amazon, just to name a few -- has launched its own agent product in recent months. Originally unveiled by Google Labs in December, Jules is positioned as a reliable, automated coding assistant that can manage a broad suite of time-consuming tasks on behalf of human users. The model is "asynchronous," which, in programming-speak, means it can start and work on tasks without having to wait for any single one of them to finish.
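Jules's internals aren't public, but the "asynchronous" behavior described above maps onto a familiar programming pattern. As a minimal illustration (with invented task names and durations), the Python sketch below starts several long-running jobs at once and handles each result as it completes, never blocking on any single task:

```python
import asyncio
import random

async def run_task(name: str) -> str:
    """Stand-in for a long-running coding task (e.g. writing tests, bumping deps)."""
    await asyncio.sleep(random.uniform(0.5, 2.0))  # simulate variable work time
    return f"{name}: done"

async def main() -> None:
    # Start every task immediately; none waits for another to begin.
    tasks = [
        asyncio.create_task(run_task(name))
        for name in ("fix-bug", "write-tests", "update-deps")
    ]
    # Report each result the moment it becomes available.
    for finished in asyncio.as_completed(tasks):
        print(await finished)

asyncio.run(main())
```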


I tried Google's XR headset, and it already beats the Apple Vision Pro in 3 ways

ZDNet

Putting on Project Moohan, an upcoming XR headset developed by Google, Samsung, and Qualcomm, for the first time felt strangely familiar. From twisting the head-strap knob on the back to slipping the standalone battery pack into my pants pocket, my mind was transported back to February 2024, when I tried on the Apple Vision Pro on launch day. Only this time, the headset was powered by Android XR, Google's newest operating system built around Gemini, the same AI model that dominated the Google I/O headlines this week. The difference in software was immediately noticeable -- from the home grid of Google apps like Photos, Maps, and YouTube (which VisionOS still lacks) to prompting for Gemini instead of Siri with a long press of the headset's multifunctional key. While my demo with Project Moohan lasted only about 10 minutes, it gave me a clear understanding of how it's challenging Apple's Vision Pro and how Google, Samsung, and Qualcomm plan to convince the masses that the future of spatial computing does, in fact, live in a bulkier space-helmet-like device. For starters, there's no denying that the industrial designers of Project Moohan drew some inspiration from the Apple Vision Pro.


Google made it clear at I/O that AI will soon be inescapable

ZDNet

Unsurprisingly, the bulk of Google's announcements at I/O this week focused on AI. Although past Google I/O events also leaned heavily on AI, what made this year's announcements different is that the features were spread across nearly every Google offering and touched nearly every task people perform every day. Because I'm an AI optimist, and my job as an AI editor involves testing tools, I have always been pretty open to using AI to optimize my daily tasks. However, Google's keynote made it clear that even those who may not be as open to it will soon find it unavoidable. Moreover, the tech giant's announcements shed light on the industry's future, revealing three major trends about where AI is headed, which you can read more about below.


I tried Google's XR glasses and they already beat my Meta Ray-Bans in 3 ways

ZDNet

Google unveiled a slew of new AI tools and features at I/O, dropping the term Gemini 95 times and AI 92 times. However, the best announcement of the entire show wasn't an AI feature; rather, the title went to one of the two hardware products announced -- the Android XR glasses. For the first time, Google gave the public a look at its long-awaited smart glasses, which pack Gemini's assistance, in-lens displays, speakers, cameras, and mics into the form factor of traditional eyeglasses. I had the opportunity to wear them for five minutes, during which I ran through a demo of using them to get visual Gemini assistance, take photos, and get navigation directions. As a Meta Ray-Bans user, I couldn't help but notice the similarities and differences between the two smart glasses -- and the features I now wish my Meta pair had.