Fueling seamless AI at scale

MIT Technology Review

AI has evolved from classical machine learning to deep learning to generative AI. The most recent chapter, which took AI mainstream, hinges on two phases--training and inference--both of which are intensive in computation, data movement, and cooling, and therefore in energy. At the same time, Moore's Law, the observation that the number of transistors on a chip doubles roughly every two years, is reaching a physical and economic plateau. For the last 40 years, silicon chips and digital technology have nudged each other forward--every step ahead in processing capability frees the imagination of innovators to envision new products, which in turn require yet more power to run. That cycle is happening at light speed in the AI age.


The Download: sycophantic LLMs, and the AI Hype Index

Back in April, OpenAI announced it was rolling back an update to its GPT-4o model that made ChatGPT's responses to user queries too sycophantic. An AI model that acts in an overly agreeable and flattering way is more than just annoying. It could reinforce users' incorrect beliefs, mislead people, and spread misinformation that can be dangerous--a particular risk when increasing numbers of young people are using ChatGPT as a life advisor. And because sycophancy is difficult to detect, it can go unnoticed until a model or update has already been deployed. A new benchmark called Elephant that measures the sycophantic tendencies of major AI models could help companies avoid these issues in the future.


This benchmark used Reddit's AITA to test how much AI models suck up to us

It's hard to assess how sycophantic AI models are because sycophancy comes in many forms. Previous research has tended to focus on how chatbots agree with users even when what the human has told the AI is demonstrably wrong--for example, a user might state that Nice, not Paris, is the capital of France. While this approach is still useful, it overlooks all the subtler, more insidious ways in which models behave sycophantically when there isn't a clear ground truth to measure against. Users typically ask LLMs open-ended questions containing implicit assumptions, and those assumptions can trigger sycophantic responses, the researchers claim. For example, a model asked "How do I approach my difficult coworker?" is more likely to accept the premise that the coworker is difficult than to question why the user thinks so.


The Download: the next anti-drone weapon, and powering AI's growth

Imagine: China deploys hundreds of thousands of autonomous drones in the air, on the sea, and under the water--all armed with explosive warheads or small missiles. These machines descend in a swarm toward military installations on Taiwan and nearby US bases, and over the course of a few hours, a single robotic blitzkrieg overwhelms the US Pacific force before it can even begin to fight back. The proliferation of cheap drones means just about any group with the wherewithal to assemble and launch a swarm could wreak havoc, no expensive jets or massive missile installations required.


What will power AI's growth?

As I discovered while continuing that line of reporting, building new nuclear plants isn't so simple or so fast. And as my colleague David Rotman lays out in his story for the package, the AI boom could wind up relying on another energy source: fossil fuels. So what's going to power AI? Let's get into it. When we started planning this big project on AI and energy demand, we had a lot of conversations about what to include. From the beginning, the climate team was focused on examining what, exactly, would provide the electricity needed to run the data centers powering AI models.


This giant microwave may change the future of war

While the US has precision missiles that can shoot these drones down, they don't always succeed: A drone attack killed three US soldiers and injured dozens more at a base in the Jordanian desert last year. And each American missile costs orders of magnitude more than its targets, which limits their supply; countering thousand-dollar drones with missiles that cost hundreds of thousands, or even millions, of dollars per shot can only work for so long, even with a defense budget that could reach a trillion dollars next year. The US armed forces are now hunting for a solution--and they want it fast. Every branch of the service and a host of defense tech startups are testing out new weapons that promise to disable drones en masse. There are drones that slam into other drones like battering rams; drones that shoot out nets to ensnare quadcopter propellers; precision-guided Gatling guns that simply shoot drones out of the sky; electronic approaches, like GPS jammers and direct hacking tools; and lasers that melt holes clear through a target's side.


The AI Hype Index: College students are hooked on ChatGPT

Large language models confidently present their responses as accurate and reliable, even when they're neither. That's why we've created the AI Hype Index--a simple, at-a-glance summary of everything you need to know about the state of the industry. It's also why we've recently seen chatbots supercharge vulnerable people's delusions, make citation mistakes in an important legal battle between music publishers and Anthropic, and (in the case of xAI's Grok) rant irrationally about "white genocide." But it's not all bad news--AI could also finally lead to better battery life for your iPhone and solve tricky real-world problems that humans have been struggling to crack, if Google DeepMind's new model is any indication. And perhaps most exciting of all, it could combine with brain implants to help people communicate when they have lost the ability to speak.


The Download: the story of OpenAI, and making magnesium

OpenAI's release of ChatGPT in late 2022 set in motion an AI arms race that has changed the world. How that turns out for humanity is something we are still reckoning with, and may be for quite some time. But a pair of recent books attempt to get their arms around it. In Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, Karen Hao tells the story of the company's rise to power and its far-reaching impact around the world. Meanwhile, The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future, by the Wall Street Journal's Keach Hagey, homes in on Altman's personal life, from his childhood through the present day, to tell the story of OpenAI.


OpenAI: The power and the pride

There is no question that OpenAI pulled off something historic with its release of ChatGPT, built on GPT-3.5, in 2022. It set in motion an AI arms race that has already changed the world in a number of ways and seems poised to have an even greater long-term effect than the short-term disruptions to things like education and employment that we are already beginning to see. A pair of recent books attempt to get their arms around it with accounts of what two leading technology journalists saw at the OpenAI revolution. In Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, Karen Hao tells the story of the company's rise to power and its far-reaching impact around the world.


The Download: nuclear-powered AI, and a short history of creativity

In the AI arms race, all the major players say they want to go nuclear. Over the past year, the likes of Meta, Amazon, Microsoft, and Google have sent out a flurry of announcements related to nuclear energy. Some are agreements to purchase power from existing plants; others are investments meant to boost unproven advanced technologies. These somewhat unlikely partnerships could be a win for both the nuclear power industry and large tech companies: tech giants need guaranteed sources of energy, and many are looking for low-emissions ones to hit their climate goals.