MIT Technology Review
This is the most misunderstood graph in AI
To some, METR's "time horizon plot" indicates that AI utopia--or apocalypse--is close at hand. The truth is more complicated.

Every time OpenAI, Google, or Anthropic drops a new frontier large language model, the AI community holds its breath. It doesn't exhale until METR, an AI research nonprofit whose name stands for "Model Evaluation & Threat Research," updates a now-iconic graph that has played a major role in the AI discourse since it was first released in March of last year. The graph suggests that certain AI capabilities are developing at an exponential rate, and more recent model releases have outperformed that already impressive trend. That was certainly the case for Claude Opus 4.5, the latest version of Anthropic's most powerful model, which was released in late November.
From guardrails to governance: A CEO's guide for securing agentic systems
A practical blueprint for companies and CEOs that shows how to secure agentic systems by shifting from prompt tinkering to hard controls on identity, tools, and data.

The previous article in this series, "Rules fail at the prompt, succeed at the boundary," focused on the first AI-orchestrated espionage campaign and the failure of prompt-level control. This article is the prescription. Across recent AI security guidance from standards bodies, regulators, and major providers, a simple idea keeps repeating: treat agents like powerful, semi-autonomous users, and enforce rules at the boundaries where they touch identity, tools, data, and outputs. These steps help define identity and limit capabilities. Today, agents run under vague, over-privileged service identities.
The Download: the future of nuclear power plants, and social media-fueled AI hype
AI is driving unprecedented investment in massive data centers and an energy supply that can support its huge computational appetite. One potential source of electricity for these facilities is next-generation nuclear power plants, which could be cheaper to construct and safer to operate than their predecessors. We recently held a subscriber-exclusive Roundtables discussion on hyperscale AI data centers and next-gen nuclear--two featured technologies on the MIT Technology Review 10 Breakthrough Technologies of 2026 list. You can watch the conversation back here, and don't forget to subscribe to make sure you catch future discussions as they happen.

Demis Hassabis, CEO of Google DeepMind, summed it up in three words: "This is embarrassing." Hassabis was replying on X to an overexcited post by Sébastien Bubeck, a research scientist at the rival firm OpenAI, announcing that two mathematicians had used OpenAI's latest large language model, GPT-5, to find solutions to 10 unsolved problems in mathematics.
The Download: squeezing more metal out of aging mines, and AI's truth crisis
In a pine forest on Michigan's Upper Peninsula, the only active nickel mine in the US is nearing the end of its life. At a time when carmakers want the metal for electric-vehicle batteries, nickel concentration at Eagle Mine is falling and could soon drop too low to warrant digging. Demand for nickel, copper, and rare earth elements is rapidly increasing amid the explosive growth of metal-intensive data centers, electric cars, and renewable energy projects. But producing these metals is becoming harder and more expensive because miners have already exploited the best resources. Here's how biotechnology could help.

What we've been getting wrong about AI's truth crisis
What would it take to convince you that the era of truth decay we were long warned about--where AI content dupes us, shapes our beliefs even when we catch the lie, and erodes societal trust in the process--is now here?
What we've been getting wrong about AI's truth crisis
Even when content is revealed to be manipulated, it still shapes our beliefs. The defenders of truth are hopelessly behind.

What would it take to convince you that the era of truth decay we were long warned about--where AI content dupes us, shapes our beliefs even when we catch the lie, and erodes societal trust in the process--is now here? A story I published last week pushed me over the edge. It also made me realize that the tools we were sold as a cure for this crisis are failing miserably. On Thursday, I reported the first confirmation that the US Department of Homeland Security, which houses immigration agencies, is using AI video generators from Google and Adobe to make content that it shares with the public.
The crucial first step for designing a successful enterprise AI system
How to identify the first iconic use case for an enterprise AI transformation.

Many organizations rushed into generative AI, only to see pilots fail to deliver value. Now, companies want measurable outcomes--but how do you design for success? At Mistral AI, we partner with global industry leaders to co-design tailored AI solutions that solve their most difficult problems. Whether it's increasing CX productivity with Cisco, building a more intelligent car with Stellantis, or accelerating product innovation with ASML, we start with open frontier models and customize AI systems to deliver impact for each company's unique challenges and goals. Our methodology starts by identifying an iconic use case, the foundation for AI transformation that sets the blueprint for future AI solutions.
The Download: inside a deepfake marketplace, and EV batteries' future
Civitai--an online marketplace for buying and selling AI-generated content, backed by the venture capital firm Andreessen Horowitz--is letting users buy custom instruction files for generating celebrity deepfakes. Some of these files were specifically designed to make pornographic images banned by the site, a new analysis has found. The study, from researchers at Stanford and Indiana University, looked at people's requests for content on the site, called "bounties." The researchers found that between mid-2023 and the end of 2024, most bounties asked for animated content--but a significant portion were for deepfakes of real people, and 90% of these deepfake requests targeted women.

Demand for electric vehicles and the batteries that power them has never been hotter. In 2025, EVs made up over a quarter of new vehicle sales globally, up from less than 5% in 2020.
Inside the marketplace powering bespoke AI deepfakes of real women
New research details how Civitai lets users buy and sell tools to fine-tune deepfakes the company says are banned.

Civitai--an online marketplace for buying and selling AI-generated content, backed by the venture capital firm Andreessen Horowitz--is letting users buy custom instruction files for generating celebrity deepfakes. Some of these files were specifically designed to make pornographic images banned by the site, a new analysis has found. The study, from researchers at Stanford and Indiana University, looked at people's requests for content on the site, called "bounties." The researchers found that between mid-2023 and the end of 2024, most bounties asked for animated content--but a significant portion were for deepfakes of real people, and 90% of these deepfake requests targeted women. The debate around deepfakes, as illustrated by the recent backlash to explicit images on the X-owned chatbot Grok, has revolved around what platforms should do to block such content.
The Download: US immigration agencies' AI videos, and inside the Vitalism movement
Plus: French company Capgemini has confirmed it's no longer working with ICE.

The US Department of Homeland Security is using AI video generators from Google and Adobe to make and edit content shared with the public, a new document reveals. The document, released on Wednesday, provides an inventory of which commercial AI tools DHS uses for tasks ranging from generating drafts of documents to managing cybersecurity. It comes as immigration agencies have flooded social media with content to support President Trump's mass deportation agenda--some of which appears to be made with AI--and as workers in tech have put pressure on their employers to denounce the agencies' activities.

For the last couple of years, I've been following the progress of a group of individuals who believe death is humanity's "core problem." Put simply, they say death is wrong--for everyone. They've even said it's morally wrong.
The AI Hype Index: Grok makes porn, and Claude Code nails your job
Everyone is panicking because AI is very bad; everyone is panicking because AI is very good. It's just that you never know which one you're going to get.

Grok is a pornography machine. Claude Code can do anything from building websites to reading your MRI. So of course Gen Z is spooked by what this means for jobs. Unnerving new research says AI is going to have a seismic impact on the labor market this year.