MIT Technology Review
The Download: how the military is using AI, and AI's climate promises
For much of last year, US Marines conducting training exercises in the waters off South Korea, the Philippines, India, and Indonesia were also running an experiment. The service members in the unit responsible for sorting through foreign intelligence and making their superiors aware of possible local threats were for the first time using generative AI to do it, testing a leading AI tool the Pentagon has been funding. Two officers tell us that they used the new system to help scour thousands of pieces of open-source intelligence--nonclassified articles, reports, images, videos--collected in the various countries where they operated, and that it did so far faster than was possible with the old method of analyzing them manually. Though the US military has been developing computer vision models and similar AI tools since 2017, the use of generative AI--tools that can engage in human-like conversation--represents a newer frontier. The International Energy Agency states in a new report that AI could eventually reduce greenhouse-gas emissions, possibly by much more than the boom in energy-guzzling data center development pushes them up.
Generative AI is learning to spy for the US military
"We still need to validate the sources," says Lowdon. But the unit's commanders encouraged the use of large language models, he says, "because they provide a lot more efficiency during a dynamic situation." The generative AI tools they used were built by the defense-tech company Vannevar Labs, which in November was granted a production contract worth up to $99 million by the Pentagon's startup-oriented Defense Innovation Unit with the goal of bringing its intelligence tech to more military units. The company, founded in 2019 by veterans of the CIA and US intelligence community, joins the likes of Palantir, Anduril, and Scale AI as a major beneficiary of the US military's embrace of artificial intelligence--not only for physical technologies like drones and autonomous vehicles but also for software that is revolutionizing how the Pentagon collects, manages, and interprets data for warfare and surveillance. Though the US military has been developing computer vision models and similar AI tools, like those used in Project Maven, since 2017, the use of generative AI--tools that can engage in human-like conversation like those built by Vannevar Labs--represents a newer frontier.
How AI is interacting with our creative human processes
The rapid proliferation of AI in our lives introduces new challenges around authorship, authenticity, and ethics in work and art. But it also poses a particularly human problem in narrative: How can we make sense of these machines, not just use them? And how do the words we choose and stories we tell about technology affect the role we allow it to take on (or even take over) in our creative lives? Both Vara's book and The Uncanny Muse, a collection of essays on the history of art and automation by the music critic David Hajdu, explore how humans have historically and personally wrestled with the ways in which machines relate to our own bodies, brains, and creativity. At the same time, The Mind Electric, a new book by a neurologist, Pria Anand, reminds us that our own inner workings may not be so easy to replicate.
Why the climate promises of AI sound a lot like carbon offsets
There are reasonable arguments to suggest that AI tools may eventually help reduce emissions, as the IEA report underscores. But what we know for sure is that they're driving up energy demand and emissions today--especially in the regional pockets where data centers are clustering. So far, these facilities, which generally run around the clock, are largely powered by natural-gas turbines, which produce significant levels of planet-warming emissions. Electricity demands are rising so fast that developers are proposing to build new gas plants and convert retired coal plants to supply the buzzy industry. The other thing we know is that there are better, cleaner ways of powering these facilities already, including geothermal plants, nuclear reactors, hydroelectric power, and wind or solar projects coupled with significant amounts of battery storage. The trade-off is that these facilities may cost more to build or operate, or take longer to get up and running.
The Download: AI co-creativity, and what Trump's tariffs mean for batteries
Existing generative tools can automate a striking range of creative tasks and offer near-instant gratification--but at what cost? Some artists and researchers fear that such technology could turn us into passive consumers of yet more AI slop. And so they are looking for ways to inject human creativity back into the process: working on what's known as co-creativity or more-than-human creativity. The idea is that AI can be used to inspire or critique creative projects, helping people make things that they would not have made by themselves. The aim is to develop AI tools that augment our creativity rather than strip it from us--pushing us to be better at composing music, developing games, designing toys, and much more--and lay the groundwork for a future in which humans and machines create things together.
How AI can help supercharge creativity
The audience watched, heads nodding, as Wilson tapped out code line by line on the projected screen--tweaking sounds, looping beats, pulling a face when she messed up. Wilson is a live coder. Instead of using purpose-built software like most electronic music producers, live coders create music by writing the code to generate it on the fly. It's an improvised performance art known as algorave. "It's kind of boring when you go to watch a show and someone's just sitting there on their laptop," she says.
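To make the practice concrete: live coders write short programs that synthesize and loop audio in real time, editing the code while it plays. Dedicated environments like TidalCycles or Sonic Pi are typically used on stage; the following is only a minimal offline sketch in plain Python (not any performer's actual setup) showing how a few lines of code can define a repeating beat and render it to a WAV file.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # CD-quality mono audio

def tone(freq, duration, volume=0.5):
    """One sine-wave note as a list of float samples in [-1, 1]."""
    n = int(SAMPLE_RATE * duration)
    return [volume * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(n)]

def silence(duration):
    """A rest of the given length in seconds."""
    return [0.0] * int(SAMPLE_RATE * duration)

# A one-bar pattern: low "kick" tone, rest, high blip, rest.
# A live coder would retype or tweak a line like this mid-performance.
pattern = tone(110, 0.2) + silence(0.05) + tone(440, 0.1) + silence(0.15)
samples = pattern * 4  # loop the bar four times

with wave.open("loop.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)  # 16-bit signed samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                           for s in samples))
```

Changing a frequency, duration, or repeat count and re-running is the offline analogue of the on-the-fly edits an algorave performer makes while the loop keeps playing.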
AI companions are the final stage of digital addiction, and lawmakers are taking aim
You might think that such AI companionship bots--AI models with distinct "personalities" that can learn about you and act as a friend, lover, cheerleader, or more--appeal only to a fringe few, but that couldn't be further from the truth. A new research paper aimed at making such companions safer, by authors from Google DeepMind, the Oxford Internet Institute, and others, lays this bare: Character.AI, the platform being sued by Garcia, says it receives 20,000 queries per second, which is about a fifth of the estimated search volume served by Google. Interactions with these companions last four times longer than the average time spent interacting with ChatGPT. One companion site I wrote about, which was hosting sexually charged conversations with bots imitating underage celebrities, told me its active users averaged more than two hours per day conversing with bots, and that most of those users are members of Gen Z. The design of these AI characters makes lawmakers' concern well warranted.
How the Pentagon is adapting to China's technological rise
Over the past three decades, Hicks has watched the Pentagon transform--politically, strategically, and technologically. She entered government in the 1990s at the tail end of the Cold War, when optimism and a belief in global cooperation still dominated US foreign policy. After 9/11, the focus shifted to counterterrorism and nonstate actors. Then came Russia's resurgence and China's growing assertiveness. Hicks took two previous breaks from government work--the first to complete a PhD at MIT and join the think tank Center for Strategic and International Studies (CSIS), which she later rejoined to lead its International Security Program after her second tour. "By the time I returned in 2021," she says, "there was one actor--the PRC (People's Republic of China)--that had the capability and the will to really contest the international system as it's set up."
The machines are rising -- but developers still hold the keys
This means software developers are going to become more important to how the world builds and maintains software. Yes, there are many ways their practices will evolve thanks to AI coding assistance, but in a world of proliferating machine-generated code, developer judgment and experience will be vital. Research done by GitClear earlier this year indicates that with AI coding assistants (like GitHub Copilot) going mainstream, code churn -- which GitClear defines as "changes that were either incomplete or erroneous when the author initially wrote, committed, and pushed them to the company's git repo" -- has significantly increased. GitClear also found a marked decrease in the number of lines of code that were moved, a signal of refactoring (essentially the upkeep that keeps code effective). In other words, since coding assistants were introduced there has been a pronounced increase in lines of code without a commensurate increase in lines deleted, updated, or replaced.
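One way to build intuition for the churn metric: flag lines that appear in one commit and then disappear again within a few subsequent commits. This toy sketch is not GitClear's actual methodology (which analyzes real diffs across whole repositories); it just models a file's history as the set of lines present after each commit.

```python
def churned_lines(commits, window=3):
    """Count lines added in one commit and removed within `window`
    subsequent commits, a rough proxy for 'code churn'.

    `commits` is a list of sets, each holding the lines of a file
    after one commit. A line that vanishes soon after being added
    suggests it was incomplete or erroneous when first pushed.
    """
    churn = 0
    for i in range(1, len(commits)):
        added = commits[i] - commits[i - 1]  # new lines in commit i
        for j in range(i + 1, min(i + 1 + window, len(commits))):
            removed = added - commits[j]     # new lines already gone
            churn += len(removed)
            added -= removed                 # count each line once
    return churn

history = [
    {"a"},            # initial commit
    {"a", "b", "c"},  # adds lines b and c
    {"a", "c"},       # b removed one commit later: churn
]
print(churned_lines(history))  # -> 1
```

Under this model, a rising churn count alongside rising lines added is exactly the pattern GitClear reports: more code landing, less of it surviving contact with review and use.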
The Download: brain-computer interfaces, and teaching an AI model to give therapy
Brain-computer interfaces (BCIs) are electrodes implanted in paralyzed people's brains so they can use imagined movements to send commands from their neurons through a wire, or via radio, to a computer. In this way, they can control a computer cursor or, in a few cases, produce speech. Recently, this field has taken some strides toward real practical applications. About 25 clinical trials of BCI implants are currently underway. And this year MIT Technology Review readers have selected these brain-computer interfaces as their addition to our annual list of 10 Breakthrough Technologies.