Google made an AI model to talk to dolphins

Popular Science

A new large language model may soon allow humans to converse with dolphins. In the coming months, researchers will test whether DolphinGemma and its companion Cetacean Hearing Augmentation Telemetry (CHAT) system can translate and mimic some of the mammals' own complex vocalizations. If successful, the breakthrough would represent the culmination of over four decades of work, documentation, and conservation efforts.

Dolphins are some of the Earth's smartest and most communicative animals. Their social interactions are so complex that researchers at the Wild Dolphin Project (WDP) have spent the last 40 years attempting to decipher them. In the process, WDP has amassed decades' worth of underwater audio and video documenting a single community of Atlantic spotted dolphins in the Bahamas.


Microsoft's Recall AI Tool Is Making an Unwelcome Return

WIRED

Security and privacy advocates are girding themselves for another uphill battle against Recall, the AI tool rolling out in Windows 11 that will screenshot, index, and store everything a user does every three seconds. This story originally appeared on Ars Technica, a trusted source for technology news, tech policy analysis, reviews, and more. Ars is owned by WIRED's parent company, Condé Nast. When Recall was introduced in May 2024, security practitioners roundly castigated it for creating a gold mine for malicious insiders, criminals, or nation-state spies if they managed to gain even brief administrative access to a Windows device. Privacy advocates warned that Recall was ripe for abuse in intimate partner violence settings.


It's a private cloud revival: Why Kubernetes and cloud-native tech are essential in the AI age

ZDNet

I have to admit, heading out to London for KubeCon + CloudNativeCon Europe 2025, I thought I might see the beginning of a downward trend for the event about building, deploying, and managing next-generation cloud applications and infrastructures. After all, the show turned 10 last year, and, in my experience, that's when conferences start to show their age. Plus, there has been plenty of news about the effect of AI on application development, and while KubeCon isn't directly about development, much of its focus is on applications and services. But boy, was I wrong. In fact, KubeCon 2025 in London was packed, with over 12,000 attendees.


Google is talking to dolphins using Pixel phones and AI - and the video is delightful

ZDNet

Dolphins are among the smartest creatures on the planet, and a new AI model from Google, combined with Pixel phones, is helping researchers better understand their language -- and hopefully even communicate with them. Dolphin sounds fall into a few specific categories, such as whistles, squawks, and clicking buzzes, each linked to a different context and behavior. By analyzing these sounds, researchers can detect patterns and structure, much as in human language. Researchers at the Wild Dolphin Project have been collecting data on this language for nearly 40 years. A collaboration with Google to use a new Google AI model called DolphinGemma lets them take that research a step further and actually predict what sound is coming next.


OpenAI is phasing out GPT-4.5 for developers

Engadget

OpenAI has announced it is phasing out GPT-4.5 from its developer API in favor of its new GPT-4.1 model. When it launched, OpenAI described GPT-4.5 as its best and most capable model so far, in part because it was a more natural conversationalist and could capably mimic some notion of emotional intelligence. Despite what its name suggests, GPT-4.1 is supposed to be better and more efficient, though it is only available through the API. That means you won't find it as an option in the public-facing ChatGPT interface, but you could someday interact with an agent that leverages the model's improvements. GPT-4.1 is supposed to be better at coding and "long context understanding," according to OpenAI, with support for "up to one million tokens of context" and knowledge of the world up to June 2024.
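For developers, moving off GPT-4.5 largely comes down to changing the model identifier in existing API calls. Below is a minimal sketch using OpenAI's official Python SDK; the model string "gpt-4.1" and the example prompt are illustrative assumptions, not details from the article.

    # Minimal sketch: calling GPT-4.1 through OpenAI's Python SDK.
    # Assumes the openai package is installed and OPENAI_API_KEY is set;
    # "gpt-4.1" is the assumed API identifier for the model described above.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4.1",  # swap in wherever a GPT-4.5 identifier was used before
        messages=[
            {"role": "user", "content": "Summarize this changelog in two sentences: ..."}
        ],
    )

    print(response.choices[0].message.content)

The advertised long-context support means the request above could, in principle, carry very large inputs, but the call pattern itself stays the same.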


Unboxing The New Nintendo Switch

FOX News

Fox on Games takes an insider look at all the new features of the soon-to-be-released Switch 2.

New York City – In a bustling event on 5th Avenue that drew over 100 journalists, content creators, and industry insiders, Nintendo made waves by unveiling the much-anticipated Switch 2. The console is set to redefine gaming experiences worldwide, and Fox was there to capture the excitement. The Switch 2 is priced at $450 for the console alone, with an option to purchase a bundled package at $500, which includes the new pack-in racing game, Mario Kart World. Nintendo has listened to feedback from the initial Switch release and introduced significant upgrades to the Switch 2's hardware. The Joy-Con controllers now connect via magnets, eliminating the traditional track system. Enhanced with a gyroscope and a new mouse mechanic, the controllers let players simulate movement with precision, akin to rolling a ball across the floor.


Nvidia commits to $500bn AI server production in the US

Al Jazeera

Chipmaker Nvidia says it plans to build artificial intelligence servers worth as much as $500bn in the United States over the next four years with help from partners such as TSMC. Nvidia is the latest US tech firm to back a push by President Donald Trump's administration for local manufacturing. Monday's announcement includes the production of its Blackwell AI chips at TSMC's factory in Phoenix, Arizona, and supercomputer manufacturing plants in Texas by Foxconn and Wistron, which are expected to ramp up in 12 to 15 months. "Adding American manufacturing helps us better meet the incredible and growing demand for AI chips and supercomputers, strengthens our supply chain and boosts our resiliency," Nvidia CEO Jensen Huang said. "Manufacturing AI chips and supercomputers in the US will create hundreds of thousands of jobs in the coming decades," Nvidia said in a statement.


Meta will start using data from EU users to train its AI models

Engadget

Meta plans to start using data collected from its users in the European Union to train its AI systems, the company announced today. Starting this week, the tech giant will begin notifying Europeans through email and its family of apps, with the message set to include an explanation of the kind of data it plans to use as part of the training. Additionally, the notification will link out to a form users can complete to opt out of the process. "We have made this objection form easy to find, read, and use, and we'll honor all objection forms we have already received, as well as newly submitted ones," says Meta. The company notes it will only use data it collects from public posts and Meta AI interactions for training purposes.


OpenAI's New GPT-4.1 Models Excel at Coding

WIRED

OpenAI announced today that it is releasing a new family of artificial intelligence models optimized to excel at coding, as it ramps up efforts to fend off increasingly stiff competition from companies like Google and Anthropic. The models are available to developers through OpenAI's application programming interface (API). OpenAI is releasing three sizes of models: GPT-4.1, GPT-4.1 Mini, and GPT-4.1 Nano. Kevin Weil, chief product officer at OpenAI, said on a livestream that the new models are better than OpenAI's most widely used model, GPT-4o, and better than its largest and most powerful model, GPT-4.5, in some ways. GPT-4.1 scored 55 percent on SWE-Bench, a widely used benchmark for gauging the prowess of coding models.
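Since all three sizes are exposed through the same developer API, comparing them on a coding prompt is mostly a matter of swapping the model name. Here is a rough sketch in Python, assuming the identifiers follow OpenAI's usual lowercase convention ("gpt-4.1", "gpt-4.1-mini", "gpt-4.1-nano"); those strings are an assumption rather than something quoted in the article.

    # Rough sketch: run the same coding prompt against each assumed GPT-4.1 size.
    # Assumes the openai package is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    client = OpenAI()
    prompt = "Write a Python function that checks whether a string is a palindrome."

    for model in ("gpt-4.1", "gpt-4.1-mini", "gpt-4.1-nano"):
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {model} ---")
        print(response.choices[0].message.content)

The smaller Mini and Nano variants would typically trade some accuracy for lower cost and latency, which is the usual reason a model family ships in several sizes.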


GPT-4.1 is here, but not for everyone. Here's who can try the new models

ZDNet

Last week, OpenAI CEO Sam Altman teased that he was dropping a new feature. Paired with reports and sightings of new model art, many speculated it was the long-awaited release of the GPT-4.1 model. It turned out to be a massive ChatGPT update that introduced new memory capabilities -- but now, OpenAI's new family of models has finally arrived. On Monday, via a livestream, OpenAI unveiled the new GPT-4.1 family of models. According to OpenAI, the family offers improvements in coding, instruction-following, and long-context understanding, and outperforms GPT-4o and GPT-4o mini "across the board."