The Download: OpenAI is building a fully automated researcher, and a psychedelic trial blind spot

MIT Technology Review

Plus: OpenAI is also creating a super app. OpenAI has a new grand challenge: building an AI researcher--a fully automated agent-based system capable of tackling large, complex problems by itself. The San Francisco firm said the new goal will be its "north star" for the next few years. By September, the company plans to build "an autonomous AI research intern" that can take on a small number of specific research problems. The intern will be the precursor to the fully automated multi-agent system, which is slated to debut in 2028. In an exclusive interview this week, OpenAI's chief scientist, Jakub Pachocki, talked me through the plans.


OpenAI is throwing everything into building a fully automated researcher

MIT Technology Review

OpenAI is refocusing its research efforts and throwing its resources into a new grand challenge. The San Francisco firm has set its sights on building what it calls an AI researcher: a fully automated agent-based system that will be able to tackle large, complex problems by itself. OpenAI says that this new research goal will be its "North Star" for the next few years, pulling together multiple research strands, including work on reasoning models, agents, and interpretability.


Mind-altering substances are (still) falling short in clinical trials

MIT Technology Review

Placebo and "knowcebo" effects are a problem. But they can also help people feel better. This week I want to look at where we are with psychedelics, the mind-altering substances that have somehow made the leap from counterculture to major focus of clinical research. Compounds like psilocybin--which is found in magic mushrooms--are being explored for all sorts of health applications, including treatments for depression, PTSD, addiction, and even obesity. Over the last decade, we've seen scientific interest in these drugs explode. But most clinical trials of psychedelics have been small and plagued by challenges.


The Download: Quantum computing for health, and why the world doesn't recycle more nuclear waste

MIT Technology Review

Plus: The FBI has admitted it's buying Americans' location data. In a laboratory on the outskirts of Oxford, a quantum computer built from atoms and light awaits its moment. The device is small but powerful--and also very valuable. Infleqtion, the company that owns it, is hoping its abilities will win $5 million at a competition next week. The prize will go to the quantum computer that can solve real health care problems that conventional "classical" computers are unable to solve. But there can be only one big winner--if there is a winner at all.


Can quantum computers now solve health care problems? We'll soon find out.

MIT Technology Review

I'm standing in front of a quantum computer built out of atoms and light at the UK's National Quantum Computing Centre on the outskirts of Oxford. On a laboratory table, a complex matrix of mirrors and lenses surrounds a Rubik's Cube-size cell where 100 cesium atoms are suspended in grid formation by a carefully manipulated laser beam. The cesium atom setup is so compact that I could pick it up, carry it out of the lab, and put it on the backseat of my car to take home. I'd be unlikely to get very far, though.


The Download: The Pentagon's new AI plans, and next-gen nuclear reactors

MIT Technology Review

Plus: The OpenClaw frenzy has led to a new Nvidia product. The Pentagon plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned. AI models like Anthropic's Claude are already used to answer questions in classified settings, including for analyzing targets in Iran. But allowing them to train on and learn from classified data is a major new development that presents unique security risks. It would also bring AI firms closer to classified data than ever before. What do new nuclear reactors mean for waste?


The Pentagon is planning for AI companies to train on classified data, defense official says

MIT Technology Review

The generative AI models used in classified environments can answer questions but don't currently learn from the data they see. The Pentagon is discussing plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned. AI models like Anthropic's Claude are already used to answer questions in classified settings; applications include analyzing targets in Iran. But allowing models to train on and learn from classified data would be a new development that presents unique security risks. It would mean sensitive intelligence like surveillance reports or battlefield assessments could become embedded in the models themselves, and it would bring AI firms into closer contact with classified data than ever before. Training versions of AI models on classified data is expected to make them more accurate and effective at certain tasks, according to a US defense official who spoke on background with MIT Technology Review.


The Download: OpenAI's US military deal, and Grok's CSAM lawsuit

MIT Technology Review

Plus: China has approved the world's first commercial brain chip. Where OpenAI's technology could show up in Iran: OpenAI has controversially agreed to give the Pentagon access to its AI. But where exactly could its tech show up, and which applications will its customers and employees tolerate? There's pressure to integrate it quickly with existing military tools. One defense official revealed it could even assist in selecting strike targets. OpenAI's partnership with Anduril, which makes drones and counter-drone technologies, adds another hint at what is to come.


Where OpenAI's technology could show up in Iran

MIT Technology Review

Three places to watch, from the margins of war to the center of combat. It's been just over two weeks since OpenAI reached a controversial agreement to allow the Pentagon to use its AI in classified environments. There are still pressing questions about what exactly OpenAI's agreement allows for; Sam Altman said the military can't use his company's technology to build autonomous weapons, but the agreement really just demands that the military follow its own (quite permissive) guidelines about such weapons. OpenAI's other main claim, that the agreement will prevent use of its technology for domestic surveillance, appears equally dubious. OpenAI is not the first tech giant to embrace military contracts it had once vowed never to enter into, but the speed of the pivot was notable. Perhaps it's just about money: OpenAI is spending lots on AI training and is on the hunt for more revenue (from sources including ads).


Nurturing agentic AI beyond the toddler stage

MIT Technology Review

The promise of autonomous agentic AI requires significant changes in the governance landscape. Parents of young children face many fears about developmental milestones, from infancy through adulthood. The number of months it takes a baby to learn to talk or walk is often used as a benchmark for wellness, or as an indicator that additional tests are needed to properly diagnose a potential health condition. A parent rejoices over the child's first steps and then realizes how much has changed when the child can quickly walk outside, instead of slowly crawling in a safe area inside. Suddenly safety, including childproofing, requires a completely different lens and approach. Generative AI hit toddlerhood between December 2025 and January 2026 with the introduction of no-code tools from multiple vendors and the debut of OpenClaw, an open-source personal agent posted on GitHub.