MIT Technology Review
DHS is using Google and Adobe AI to make videos
Immigration agencies have been flooding social media with bizarre, seemingly AI-generated content. We now know more about what might be making it. The US Department of Homeland Security is using AI video generators from Google and Adobe to make and edit content shared with the public, a new document reveals. It comes as immigration agencies have flooded social media with content to support President Trump's mass deportation agenda--some of which appears to be made with AI--and as workers in tech have put pressure on their employers to denounce the agencies' activities. The document, released on Wednesday, provides an inventory of which commercial AI tools DHS uses for tasks ranging from generating drafts of documents to managing cybersecurity. In a section about "editing images, videos or other public affairs materials using AI," it reveals for the first time that DHS is using Google's Veo 3 video generator and Adobe Firefly, estimating that the agency has between 100 and 1,000 licenses for the tools.
The Download: inside the Vitalism movement, and why AI's "memory" is a privacy problem
Meet the Vitalists: the hardcore longevity enthusiasts who believe death is "wrong"
Last April, an excited crowd gathered at a compound in Berkeley, California, for a three-day event called the Vitalist Bay Summit. It was part of a longer, two-month residency that hosted various events to explore tools--from drug regulation to cryonics--that might be deployed in the fight against death. One of the main goals, though, was to spread the word of Vitalism, a somewhat radical movement established by Nathan Cheng and his colleague Adam Gries a few years ago. Consider it longevity for the most hardcore adherents--a sweeping mission to which nothing short of total devotion will do. Although interest in longevity has certainly taken off in recent years, not everyone in the broader longevity space shares the Vitalists' commitment to actually making death obsolete. And the Vitalists feel that momentum is building, not just for the science of aging and the development of lifespan-extending therapies, but for the acceptance of their philosophy that death is wrong.
Roundtables: Why AI Companies Are Betting on Next-Gen Nuclear
AI is driving unprecedented investment in massive data centers and an energy supply that can support its huge computational appetite. One potential source of electricity for these facilities is next-generation nuclear power plants, which could be cheaper to construct and safer to operate than their predecessors. Watch a discussion with our editors and reporters on hyperscale AI data centers and next-gen nuclear--two featured technologies on the MIT Technology Review list.
China figured out how to sell EVs. Now it has to deal with their aging batteries.
Here are our picks for the advances to watch in the years ahead--and why we think they matter right now.
What AI "remembers" about you is privacy's next frontier
Agents' technical underpinnings create the potential for breaches that expose the entire mosaic of your life.
The ability to remember you and your preferences is rapidly becoming a big selling point for AI chatbots and agents. Earlier this month, Google announced Personal Intelligence, a new way for people to interact with the company's Gemini chatbot that draws on their Gmail, photos, search, and YouTube histories to make Gemini "more personal, proactive, and powerful." It echoes similar moves by OpenAI, Anthropic, and Meta to add new ways for their AI products to remember and draw from people's personal details and preferences. While these features have potential advantages, we need to do more to prepare for the new risks they could introduce into these complex technologies. Personalized, interactive AI systems are built to act on our behalf, maintain context across conversations, and improve our ability to carry out all sorts of tasks, from booking travel to filing taxes.
Rules fail at the prompt, succeed at the boundary
From the Gemini Calendar prompt-injection attack of 2026 to the September 2025 state-sponsored hack that used Anthropic's Claude Code as an automated intrusion engine, the coercion of human-in-the-loop agentic actions and fully autonomous agentic workflows is the new attack vector for hackers. In the Anthropic case, roughly 30 organizations across tech, finance, manufacturing, and government were affected. Anthropic's threat team assessed that the attackers used AI to carry out 80% to 90% of the operation: reconnaissance, exploit development, credential harvesting, lateral movement, and data exfiltration, with humans stepping in only at a handful of key decision points. This was not a lab demo; it was a live espionage campaign. The attackers hijacked an agentic setup (Claude Code plus tools exposed via the Model Context Protocol, or MCP) and jailbroke it by decomposing the attack into small, seemingly benign tasks and telling the model it was doing legitimate penetration testing. The same loop that powers developer copilots and internal agents was repurposed as an autonomous cyber-operator.
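The "succeed at the boundary" idea from the headline can be sketched in a few lines. This is a minimal, hypothetical illustration (the names `ToolCall`, `authorize`, and the allowlists are invented for this example, not from any real MCP SDK): instead of trusting instructions in the prompt, a deterministic policy check runs on every tool call the agent emits, so a model that has been talked into "legitimate penetration testing" still cannot reach tools or targets outside policy.

```python
# Hypothetical sketch: enforce policy at the tool-call boundary, not in
# the prompt. All names here are illustrative, not a real MCP API.
from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str    # the capability the agent wants to invoke
    target: str  # e.g. the host or resource it wants to touch

# Boundary policy: an allowlist checked on every single call.
ALLOWED_TOOLS = {"read_file", "run_tests"}
ALLOWED_TARGETS = {"staging.internal"}

def authorize(call: ToolCall) -> bool:
    """Deterministic check applied outside the model. A jailbroken model
    can rephrase its intent, but it cannot rephrase its way past an
    allowlist that never sees the prompt at all."""
    return call.tool in ALLOWED_TOOLS and call.target in ALLOWED_TARGETS

# Each "small, seemingly benign task" is still checked individually:
print(authorize(ToolCall("read_file", "staging.internal")))    # True
print(authorize(ToolCall("port_scan", "victim.example.com")))  # False
```

The point of the sketch is the placement, not the sophistication: a rule stated in the system prompt can be argued with, while a check at the boundary between the model and its tools cannot.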
The Download: A bid to treat blindness, and bridging the internet divide
Plus: TikTok won't be heading to court this week
The first human test of a rejuvenation method will begin "shortly"
Life Biosciences, a small Boston startup founded by Harvard professor and life-extension evangelist David Sinclair, has won FDA approval to proceed with the first targeted attempt at age reversal in human volunteers. The company plans to try to treat eye disease with a radical rejuvenation concept called "reprogramming" that has recently attracted hundreds of millions in investment for Silicon Valley firms like Altos Labs, New Limit, and Retro Biosciences, backed by many of the biggest names in tech.
Today, an estimated 2.2 billion people still have either limited or no access to the internet, largely because they live in remote places. But that number could drop this year, thanks to tests of stratospheric airships, uncrewed aircraft, and other high-altitude platforms for internet delivery. Although Google shuttered its high-profile internet balloon project Loon in 2021, work on other kinds of high-altitude platform stations has continued behind the scenes. Now, several companies claim they have solved Loon's problems--and are getting ready to prove the tech's internet-beaming potential starting this year.
OpenAI's latest product lets you vibe code science
Prism is a ChatGPT-powered text editor that automates much of the work involved in writing scientific papers.
OpenAI just revealed what its new in-house team, OpenAI for Science, has been up to. The firm has released a free LLM-powered tool for scientists called Prism, which embeds ChatGPT in a text editor for writing scientific papers. The idea is to put ChatGPT front and center inside software that scientists use to write up their work in much the same way that chatbots are now embedded into popular programming editors. Kevin Weil, head of OpenAI for Science, pushes that analogy himself. "I think 2026 will be for AI and science what 2025 was for AI in software engineering," he said at a press briefing yesterday.
Stratospheric internet could finally start taking off this year
High-altitude platforms could help connect over 2 billion people around the world who are still offline.
Today, an estimated 2.2 billion people still have either limited or no access to the internet, largely because they live in remote places. But that number could drop this year, thanks to tests of stratospheric airships, uncrewed aircraft, and other high-altitude platforms for internet delivery. Even with nearly 10,000 active Starlink satellites in orbit and the OneWeb constellation of 650 satellites, solid internet coverage is not a given across vast swathes of the planet. One of the most prominent efforts to plug the connectivity gap was Google X's Loon project. Launched in 2011, it aimed to deliver access using high-altitude balloons stationed above predetermined spots on Earth. But the project faced literal headwinds--the Loons kept drifting away and new ones had to be released constantly, making the venture economically unfeasible.
The Download: OpenAI's plans for science, and chatbot age verification
In the three years since ChatGPT's explosive debut, OpenAI's technology has upended a remarkable range of everyday activities at home, at work, and in schools. Now OpenAI is making an explicit play for scientists. In October, the firm announced that it had launched a whole new team, called OpenAI for Science, dedicated to exploring how its large language models could help scientists and tweaking its tools to support them. How does a push into science fit with OpenAI's wider mission? And what exactly is the firm hoping to achieve? I put these questions to Kevin Weil, a vice president at OpenAI who leads the new OpenAI for Science team, in an exclusive interview.
Inside OpenAI's big play for science
An exclusive conversation with Kevin Weil, head of OpenAI for Science, a new in-house team that wants to make scientists more productive. In the three years since ChatGPT's explosive debut, OpenAI's technology has upended a remarkable range of everyday activities at home, at work, in schools--anywhere people have a browser open or a phone out, which is everywhere. Now OpenAI is making an explicit play for scientists. In October, the firm announced that it had launched a whole new team, called OpenAI for Science, dedicated to exploring how its large language models could help scientists and tweaking its tools to support them. The last couple of months have seen a slew of social media posts and academic publications in which mathematicians, physicists, biologists, and others have described how LLMs (and OpenAI's GPT-5 in particular) have helped them make a discovery or nudged them toward a solution they might otherwise have missed. In part, OpenAI for Science was set up to engage with this community.