MIT Technology Review
AI's energy impact is still small--but how we handle it is huge
Innovation in IT got us to this point. Graphics processing units (GPUs) that power the computing behind AI have fallen in cost by 99% since 2006. There was similar concern about the energy use of data centers in the early 2010s, with wild projections of growth in electricity demand. But gains in computing power and energy efficiency not only proved these projections wrong but enabled a 550% increase in global computing capability from 2010 to 2018 with only minimal increases in energy use. In the late 2010s, however, the trends that had saved us began to break.
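To get a rough sense of what a 99% cost decline means year over year, here is a back-of-the-envelope calculation. The 2006 start year comes from the article; the 2024 end year is an assumption made only for the sake of the arithmetic.

```python
# Illustrative arithmetic only: the annualized rate implied by a 99%
# GPU cost decline. Start year (2006) is from the article; the end
# year (2024) is an assumption, not a figure from the article.
start_year, end_year = 2006, 2024
remaining_fraction = 0.01  # a 99% decline leaves 1% of the original cost

years = end_year - start_year
annual_factor = remaining_fraction ** (1 / years)   # per-year multiplier
annual_decline_pct = (1 - annual_factor) * 100

print(f"Implied average annual cost decline: {annual_decline_pct:.1f}%")
```

Under those assumptions, the implied decline works out to roughly 22-23% per year, sustained for nearly two decades--which is why the flat energy curve of the 2010s held as long as it did.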
We did the math on AI's energy footprint. Here's the story you haven't heard.
AI's integration into our lives is the most significant shift in online life in more than a decade. Hundreds of millions of people now regularly turn to chatbots for help with homework, research, and coding, or to create images and videos. Today, a new analysis by MIT Technology Review provides an unprecedented and comprehensive look at how much energy the AI industry uses--down to a single query--to trace where its carbon footprint stands now, and where it's headed, as AI barrels toward billions of daily users. This story is part of MIT Technology Review's series "Power Hungry: AI and our energy future," on the energy demands and carbon costs of the artificial-intelligence revolution. We spoke to two dozen experts measuring AI's energy demands, evaluated different AI models and prompts, pored over hundreds of pages of projections and reports, and questioned top AI model makers about their plans.
Four reasons to be optimistic about AI's energy usage
"Dollars are being invested, GPUs are being burned, water is being evaporated--it's just absolutely the wrong direction," says Ali Farhadi, CEO of the Seattle-based nonprofit Allen Institute for AI. But sift through the talk of rocketing costs--and climate impact--and you'll find reasons to be hopeful. There are innovations underway that could improve the efficiency of the software behind AI models, the computer chips those models run on, and the data centers where those chips hum around the clock. Here's what you need to know about how energy use, and therefore carbon emissions, could be cut across all three of those domains, plus an added argument for cautious optimism: There are reasons to believe that the underlying business realities will ultimately bend toward more energy-efficient AI. The most obvious place to start is with the models themselves--the way they're created and the way they're run.
AI can do a better job of persuading people than we do
Their findings are the latest in a growing body of research demonstrating LLMs' powers of persuasion. The authors warn that they show how AI tools can craft sophisticated, persuasive arguments given even minimal information about the humans they're interacting with. The research has been published in the journal Nature Human Behaviour. "Policymakers and online platforms should seriously consider the threat of coordinated AI-based disinformation campaigns, as we have clearly reached the technological level where it is possible to create a network of LLM-based automated accounts able to strategically nudge public opinion in one direction," says Riccardo Gallotti, an interdisciplinary physicist at Fondazione Bruno Kessler in Italy, who worked on the project. "These bots could be used to disseminate disinformation, and this kind of diffused influence would be very hard to debunk in real time," he says.
The Download: Montana's experimental treatments, and Google DeepMind's new AI agent
The news: Montana has passed a bill that allows clinics to sell unproven treatments. Under the legislation, doctors can apply for a license to open an experimental treatment clinic and recommend and sell therapies not approved by the Food and Drug Administration (FDA) to their patients. Why it matters: Once it's signed by the governor, the law will be the most expansive in the country in allowing access to drugs that have not been fully tested. The bill allows any drug produced in the state to be sold in it, provided it has been through phase I clinical trials--but those trials do not determine whether a drug is effective. The big picture: The bill was drafted and lobbied for by people interested in extending human lifespans.
Google DeepMind's new AI agent uses large language models to crack real-world problems
"You can see it as a sort of super coding agent," says Pushmeet Kohli, a vice president at Google DeepMind who leads its AI for Science teams. "It doesn't just propose a piece of code or an edit, it actually produces a result that maybe nobody was aware of." In particular, AlphaEvolve came up with a way to improve the software Google uses to allocate jobs to its many millions of servers around the world. Google DeepMind says it has been using the new software across all of its data centers for more than a year, freeing up 0.7% of Google's total computing resources. That might not sound like much, but at Google's scale it's huge.
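DeepMind has not published the heuristic AlphaEvolve discovered, but the underlying problem is a form of online bin packing: fitting jobs with resource demands onto machines with fixed capacity. The sketch below is a simple greedy best-fit baseline, purely illustrative of the problem shape--it is not Google's algorithm, and the job and machine figures are made up.

```python
# Purely illustrative best-fit heuristic for the job-allocation problem
# AlphaEvolve reportedly improved on. NOT Google's algorithm; all
# job/machine numbers below are invented for the example.

def allocate(jobs, machines):
    """Assign each job (cpu, mem demand) to the machine whose remaining
    capacity fits it most tightly. Returns {job_index: machine_index}."""
    free = [list(cap) for cap in machines]  # remaining (cpu, mem) per machine
    placement = {}
    for i, (cpu, mem) in enumerate(jobs):
        best, best_slack = None, None
        for m, (fc, fm) in enumerate(free):
            if fc >= cpu and fm >= mem:
                slack = (fc - cpu) + (fm - mem)  # leftover if placed here
                if best is None or slack < best_slack:
                    best, best_slack = m, slack
        if best is not None:           # skip jobs that fit nowhere
            free[best][0] -= cpu
            free[best][1] -= mem
            placement[i] = best
    return placement

jobs = [(4, 8), (2, 2), (6, 4)]   # (cpu cores, GB memory) demands
machines = [(8, 8), (8, 8)]       # per-machine capacities
print(allocate(jobs, machines))   # → {0: 0, 1: 1, 2: 1}
```

Even a small improvement over a baseline like this compounds: shaving fractions of a percent of stranded capacity across millions of servers is how a 0.7% saving becomes material at Google's scale.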
Police tech can sidestep facial recognition bans now
Companies like Flock and Axon sell suites of sensors--cameras, license plate readers, gunshot detectors, drones--and then offer AI tools to make sense of that ocean of data (at last year's conference I saw schmoozing between countless AI-for-police startups and the chiefs they sell to on the expo floor). Departments say these technologies save time, ease officer shortages, and help cut down on response times. Those sound like fine goals, but this pace of adoption raises an obvious question: Who makes the rules here? When does the use of AI cross over from efficiency into surveillance, and what type of transparency is owed to the public? In some cases, AI-powered police tech is already driving a wedge between departments and the communities they serve.
The Download: a new form of AI surveillance, and the US and China's tariff deal
Police and federal agencies have found a controversial new way to skirt the growing patchwork of laws that curb how they use facial recognition: an AI model that can track people based on attributes like body size, gender, hair color and style, clothing, and accessories. The tool, called Track and built by the video analytics company Veritone, is used by 400 customers, including state and local police departments and universities all over the US. It is also expanding federally. The product has drawn criticism from the American Civil Liberties Union, which--after learning of the tool through MIT Technology Review--said it was the first instance they'd seen of a nonbiometric tracking system used at scale in the US.
How a new type of AI is helping police skirt facial recognition bans
"The whole vision behind Track in the first place," says Veritone CEO Ryan Steelberg, was "if we're not allowed to track people's faces, how do we assist in trying to potentially identify criminals or malicious behavior or activity?" In addition to enabling tracking where facial recognition isn't legally allowed, Steelberg says, Track works when faces are obscured or not visible. The American Civil Liberties Union, which learned of the tool through MIT Technology Review, warned that it raises many of the same privacy concerns as facial recognition but also introduces new ones at a time when the Trump administration is pushing federal agencies to ramp up monitoring of protesters, immigrants, and students. Veritone gave us a demonstration of Track in which it analyzed people in footage from different environments, ranging from the January 6 riots to subway stations.
A new AI translation system for headphones clones multiple voices simultaneously
"There are so many smart people across the world, and the language barrier prevents them from having the confidence to communicate," says Shyam Gollakota, a professor at the University of Washington, who worked on the project. "My mom has such incredible ideas when she's speaking in Telugu, but it's so hard for her to communicate with people in the US when she visits from India. We think this kind of system could be transformative for people like her." While there are plenty of other live AI translation systems out there, such as the one running on Meta's Ray-Ban smart glasses, they focus on a single speaker, not multiple people speaking at once, and deliver robotic-sounding automated translations. The new system is designed to work with existing, off-the-shelf noise-canceling headphones that have microphones, plugged into a laptop powered by an Apple M2 chip, which can run neural networks.