
Dell wants to be your one-stop shop for AI infrastructure

ZDNet

Michael Dell is pitching a "decentralized" future for artificial intelligence that his company's devices will make possible. "The future of AI will be decentralized, low-latency, and hyper-efficient," predicted the Dell Technologies founder, chairman, and CEO in his Dell World keynote, which you can watch on YouTube. "AI will follow the data, not the other way around," Dell said at Monday's kickoff of the company's four-day customer conference in Las Vegas. Dell is betting that the complexity of deploying generative AI on-premises is driving companies to embrace a vendor that supplies all of the parts, plus round-the-clock service and support, including monitoring. On day two of the show, Dell chief operating officer Jeffrey Clarke noted that Dell's survey of enterprise customers shows 37% want an infrastructure vendor to "build their entire AI stack for them," adding, "We think Dell is becoming an enterprise's 'one-stop shop' for all AI infrastructure."


Google releases its asynchronous Jules AI agent for coding - how to try it for free

ZDNet

The race to deploy AI agents is heating up. At its annual I/O developer conference yesterday, Google announced that Jules, its new AI coding assistant, is now available worldwide in public beta. The launch marks the company's latest effort to corner the burgeoning market for AI agents, widely regarded across Silicon Valley as a more practical and profitable form of chatbot. Virtually every other major tech giant -- including Meta, OpenAI, and Amazon -- has launched its own agent product in recent months. Originally unveiled by Google Labs in December, Jules is positioned as a reliable, automated coding assistant that can manage a broad suite of time-consuming tasks on behalf of human users. The model is "asynchronous," which, in programming-speak, means it can start new tasks without waiting for any single one of them to finish.
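That notion of asynchrony is a general programming concept, not anything specific to Jules's internals. A minimal Python sketch of the idea, using hypothetical task names purely for illustration:

```python
import asyncio

async def run_task(name: str, delay: float) -> str:
    # Stand-in for a long-running coding task (e.g., writing tests, fixing a bug).
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list[str]:
    # Kick off every task immediately; none blocks waiting for another to finish.
    jobs = [
        asyncio.create_task(run_task(name, delay))
        for name, delay in [("write-tests", 0.02), ("fix-bug", 0.01)]
    ]
    # gather() collects the results once all tasks complete, in submission order.
    return await asyncio.gather(*jobs)

results = asyncio.run(main())
print(results)  # ['write-tests done', 'fix-bug done']
```

The point is that both tasks are in flight at once; a synchronous assistant would instead finish "write-tests" before even starting "fix-bug".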


I tried Google's XR headset, and it already beats the Apple Vision Pro in 3 ways

ZDNet

Putting on Project Moohan, an upcoming XR headset developed by Google, Samsung, and Qualcomm, for the first time felt strangely familiar. From twisting the head-strap knob on the back to slipping the standalone battery pack into my pants pocket, my mind was transported back to February 2024, when I tried on the Apple Vision Pro on launch day. Only this time, the headset was powered by Android XR, Google's newest operating system built around Gemini, the same AI model that dominated the Google I/O headlines this week. The difference in software was immediately noticeable -- from the home grid of Google apps like Photos, Maps, and YouTube (which VisionOS still lacks) to prompting Gemini instead of Siri with a long press of the headset's multifunctional key. While my demo with Project Moohan lasted only about 10 minutes, it gave me a clear understanding of how it's challenging Apple's Vision Pro and how Google, Samsung, and Qualcomm plan to convince the masses that the future of spatial computing does, in fact, live in a bulkier, space-helmet-like device. For starters, there's no denying that the industrial designers of Project Moohan drew some inspiration from the Apple Vision Pro.


Google made it clear at I/O that AI will soon be inescapable

ZDNet

Unsurprisingly, the bulk of Google's announcements at I/O this week focused on AI. Although past Google I/O events also leaned heavily on AI, what made this year's announcements different is that the features were spread across nearly every Google offering and touched nearly every task people perform every day. Because I'm an AI optimist, and my job as an AI editor involves testing tools, I have always been pretty open to using AI to optimize my daily tasks. However, Google's keynote made it clear that even those who may not be as open to it will soon find it unavoidable. Moreover, the tech giant's announcements shed light on the industry's future, revealing three major trends about where AI is headed, which you can read more about below.


I tried Google's XR glasses and they already beat my Meta Ray-Bans in 3 ways

ZDNet

Google unveiled a slew of new AI tools and features at I/O, saying "Gemini" 95 times and "AI" 92 times. However, the best announcement of the entire show wasn't an AI feature; rather, that title went to one of the two hardware products announced -- the Android XR glasses. For the first time, Google gave the public a look at its long-awaited smart glasses, which pack Gemini's assistance, in-lens displays, speakers, cameras, and mics into the form factor of traditional eyeglasses. I had the opportunity to wear them for five minutes, during which I ran through a demo of using them to get visual Gemini assistance, take photos, and get navigation directions. As a Meta Ray-Bans user, I couldn't help but notice the similarities and differences between the two smart glasses -- and the features I now wish my Meta pair had.


I'm an AI expert, and these 8 announcements at Google I/O impressed me the most

ZDNet

The past two Google I/O developer conferences have mainly been AI events, and this year is no different. The tech giant used the stage to unveil features across all its most popular products, even bringing AI experiments that were previously announced to fruition. This means that dozens of AI features and tools were unveiled. They're meant to transform how you use Google offerings, including how you shop, video call, sort your inbox, search the web, create images, edit video, code, and more. Since such a firehose of information is packed into a two-hour keynote address, you may be wondering which features are actually worth paying attention to.


Google's new AI shopping tool just changed the way we shop online - here's why

ZDNet

In recent years, Google Search's shopping features have evolved to make Search a one-stop shop for consumers searching for specific products, deals, and retailers. Shoppers on a budget can scour Search's Shopping tab during major sale events to see which retailer offers the best deal. But often, consumers miss out on a product's best discount, paying more because they don't want to wait for the next sale. At this year's Google I/O developer conference, Google aims to solve this problem with AI. Shopping in Google's new AI Mode integrates Gemini's capabilities into Google's existing online shopping features, allowing consumers to use conversational phrases to find the perfect product.


I tried Samsung's Project Moohan XR headset at I/O 2025 - and couldn't help but smile

ZDNet

Putting on Project Moohan, an upcoming XR headset developed by Google, Samsung, and Qualcomm, for the first time felt strangely familiar. From twisting the head-strap knob on the back to slipping the standalone battery pack into my pants pocket, my mind was transported back to February 2024, when I tried on the Apple Vision Pro on launch day. Only this time, the headset was powered by Android XR, Google's newest operating system built around Gemini, the same AI model that dominated the Google I/O headlines this week. The difference in software was immediately noticeable, from the starting home grid of Google apps like Photos, Maps, and YouTube (which VisionOS still lacks) to prompting Gemini instead of Siri with a long press of the headset's multifunctional key. While my demo with Project Moohan lasted only about ten minutes, it gave me a fundamental understanding of how it's challenging Apple's Vision Pro and how Google, Samsung, and Qualcomm plan to convince the masses that the future of spatial computing does, in fact, live in a bulkier, space-helmet-like device. For starters, there's no denying that the industrial designers of Project Moohan drew some inspiration from the Apple Vision Pro.


Is Google's $250-per-month AI subscription plan worth it? Here's what's included

ZDNet

If you're one of the 8% of Americans who say they're willing to pay for AI, Google has a deal for you -- a $250-per-month AI subscription. The company unveiled Google AI Ultra today, a plan with the biggest usage limits for Google's suite of AI tools and access to the highest versions of those tools. Google AI Ultra is intended for filmmakers, developers, and creative professionals, and gives users access to tools like Veo, Imagen, Whisk, NotebookLM, and a new tool called Flow. Subscribers also get a massive expansion in storage across Google platforms, plus YouTube Premium ($13.99 per month on its own). Here's a full breakdown of what the new plan includes. Google said the current AI Premium plan is also getting an upgrade -- to Gemini AI Pro.


Google just gave Gmail a major AI upgrade, and it solves a big problem for me

ZDNet

The Google I/O keynote took place on Tuesday, and the company took the stage to unveil new features across all of its product offerings. Of course, this included AI upgrades to the Google Workspace suite of applications, which millions of users rely on every day to get their work done, including Google Docs, Meet, Slides, Gmail, and Vids. The features unveiled this year focused on practicality. They embed AI into the Google apps you already use every day to speed up your workflow by performing tedious and time-consuming tasks, such as cleaning out your inbox. Everyone can relate to being bombarded with emails.