

Human head transplants' gory, Frankenstein-esque history

Popular Science

In Mary Shelley's Frankenstein, a mad scientist creates a monstrous creature from severed body parts. In certain film adaptations, a dismembered head is tacked onto the malformed body. Then, with the help of a lightning storm, a new life is born. Ever since the first successful kidney transplant in 1954, modern organ transplantation has often been linked to the horrors of Frankenstein.


'The biggest decision yet': Jared Kaplan on allowing AI to train itself

The Guardian

Anthropic's chief scientist says AI autonomy could spark a beneficial 'intelligence explosion' - or be the moment humans lose control. Humanity will have to decide by 2030 whether to take the "ultimate risk" of letting artificial intelligence systems train themselves to become more powerful, one of the world's leading AI scientists has said. Jared Kaplan, the chief scientist and co-founder of the $180bn (£135bn) US startup Anthropic, said a choice was looming about how much autonomy the systems should be given to evolve. The move could trigger a beneficial "intelligence explosion" - or be the moment humans end up losing control. In an interview about the intensely competitive race to reach artificial general intelligence (AGI) - sometimes called superintelligence - Kaplan urged international governments and society to engage in what he called "the biggest decision". Anthropic is part of a pack of frontier AI companies, including OpenAI, Google DeepMind, xAI, Meta and Chinese rivals led by DeepSeek, racing for AI dominance. Its widely used AI assistant, Claude, has become particularly popular among business customers.


On the Origin of Algorithmic Progress in AI

Gundlach, Hans, Fogelson, Alex, Lynch, Jayson, Trisovic, Ana, Rosenfeld, Jonathan, Sandhu, Anmol, Thompson, Neil

arXiv.org Artificial Intelligence

Algorithms have been estimated to increase AI training FLOP efficiency by a factor of 22,000 between 2012 and 2023 [Ho et al., 2024]. Running small-scale ablation experiments on key innovations from this time period, we are able to account for less than 10x of these gains. Surveying the broader literature, we estimate that additional innovations not included in our ablations account for less than 10x, yielding a total under 100x. This leads us to conduct scaling experiments, which reveal that much of this efficiency gap can be explained by algorithms with scale-dependent efficiency improvements. In particular, we conduct scaling experiments between LSTMs and Transformers, finding exponent differences in their compute-optimal scaling law while finding little scaling difference for many other innovations. These experiments demonstrate that - contrary to standard assumptions - an algorithm's efficiency gains are tied to compute scale. Using experimental extrapolation and literature estimates, we account for 6,930x efficiency gains over the same time period, with the scale-dependent LSTM-to-Transformer transition accounting for the majority of gains. Our results indicate that algorithmic progress for small models has been far slower than previously assumed, and that measures of algorithmic efficiency are strongly reference-dependent.
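The abstract's central point - that an algorithm's measured efficiency gain depends on the compute scale at which you measure it - can be sketched numerically. The coefficients below are hypothetical illustrations, not the paper's fitted values: two power-law loss curves L(C) = a·C^(-alpha) with different exponents stand in for the LSTM-to-Transformer comparison.

```python
# Illustrative sketch of scale-dependent algorithmic efficiency.
# The a and alpha values below are hypothetical, NOT the paper's fits;
# they only show why a single fixed "efficiency factor" breaks down
# when two algorithms have different compute-optimal scaling exponents.

def loss(C, a, alpha):
    """Power-law loss curve: L(C) = a * C**(-alpha)."""
    return a * C ** (-alpha)

def compute_to_reach(target_loss, a, alpha):
    """Invert L = a * C**(-alpha)  =>  C = (a / L)**(1 / alpha)."""
    return (a / target_loss) ** (1.0 / alpha)

# Hypothetical "old" (LSTM-like) vs "new" (Transformer-like) exponents.
a_old, alpha_old = 10.0, 0.07
a_new, alpha_new = 10.0, 0.08

for C_new in (1e15, 1e18, 1e21):
    L = loss(C_new, a_new, alpha_new)              # loss the new algorithm reaches
    C_old = compute_to_reach(L, a_old, alpha_old)  # compute the old one would need
    # The multiplier grows like C_new**(alpha_new/alpha_old - 1), so the
    # measured "gain" is reference-dependent rather than a constant.
    print(f"at {C_new:.0e} FLOP: efficiency multiplier {C_old / C_new:,.0f}x")
```

Under these made-up numbers the multiplier grows from roughly a hundredfold to a thousandfold as compute scales up, which is the sense in which small-scale ablations (the "less than 10x" measurements above) can drastically understate efficiency gains at frontier scale.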


At TIME100 Impact Dinner, AI Leaders Talk How AI Can Transform Business

TIME - Tech

Artificial intelligence is transforming the business world in ways we couldn't have imagined until recently. Just how--and what the future holds--was the topic of a panel discussion at the TIME100 Impact Dinner: Leaders Shaping the Future of AI in San Francisco on Monday, moderated by TIME's executive editor Nikhil Kumar. The panelists were Ravi Kumar S, CEO of Cognizant, which sponsored the event; Athina Kanioura, chief strategy and transformation officer at PepsiCo, which also sponsored the event; and Jared Kaplan, co-founder and chief science officer at Anthropic. Ravi Kumar and Kaplan were both featured on the 2025 TIME100 AI list, which highlights the 100 most influential people in AI this year, from computer scientists to business leaders to policymakers and artists. "One of the things about the public conversation about AI is that quite often it is focused on the companies behind the technology and what they are doing," said Kumar when introducing the panel.


'It's going to be really bad': Fears over AI bubble bursting grow in Silicon Valley

BBC News

At OpenAI's DevDay this week, OpenAI boss Sam Altman did what American tech bosses rarely do these days: he actually answered questions from reporters. "I know it's tempting to write the bubble story," Mr Altman told me as he sat flanked by his top lieutenants. "In fact, there are many parts of AI that I think are kind of bubbly right now." In Silicon Valley, the debate over whether AI companies are overvalued has taken on a new urgency. Sceptics are privately - and some now publicly - asking whether the rapid rise in the value of AI tech companies may be, at least in part, the result of what they call financial engineering.


What If A.I. Doesn't Get Much Better Than This?

The New Yorker

For this week's Open Questions column, Cal Newport is filling in for Joshua Rothman. Much of the euphoria and dread swirling around today's artificial-intelligence technologies can be traced back to January, 2020, when a team of researchers at OpenAI published a thirty-page report titled "Scaling Laws for Neural Language Models." The team was led by the A.I. researcher Jared Kaplan, and included Dario Amodei, who is now the C.E.O. of Anthropic. They investigated a fairly nerdy question: What happens to the performance of language models when you increase their size and the intensity of their training? Back then, many machine-learning experts thought that, after they had reached a certain size, language models would effectively start memorizing the answers to their training questions, which would make them less useful once deployed.


Anthropic's chief scientist on 5 ways agents will be even better in 2025

MIT Technology Review

In October, Anthropic showed off one of the most advanced agents yet: an extension of its Claude large language model called computer use. As the name suggests, it lets you direct Claude to use a computer much as a person would, by moving a cursor, clicking buttons, and typing text. Instead of simply having a conversation with Claude, you can now ask it to carry out on-screen tasks for you. Anthropic notes that the feature is still cumbersome and error-prone. But it is already available to a handful of testers, including third-party developers at companies such as DoorDash, Canva, and Asana. Computer use is a glimpse of what's to come for agents.


She didn't get an apartment because of an AI-generated score – and sued to help others avoid the same fate

The Guardian

That was the score Mary Louis was given by an AI-powered tenant screening tool. The software, SafeRent, didn't explain in its 11-page report how the score was calculated or how it weighed various factors. It didn't say what the score actually signified. It just displayed Louis's number and determined it was too low. Louis, who works as a security guard, had applied for an apartment in an eastern Massachusetts suburb.


What Serial Daters and Matchmakers Alike Think We Should Do About Our Dating Crisis

Slate

The singles of today's dating culture are not happy. A vast majority of young people report they're burned out by app dating, and many are also struggling to date IRL. Even with the bevy of dating options and tools now available, singles from their 20s to their 40s told me that finding meaningful, long-term relationships is becoming harder than ever. There's no one reason but a confluence of factors behind the current perilous state of modern courtship: a loneliness epidemic exacerbated by the COVID pandemic, which has eroded socializing skills, and a surplus of unvetted suitors that major dating apps like Tinder, Hinge, and Bumble push onto users while hiding their most authentic matches behind a paywall. Serious intention is also seriously lacking in today's climate.


Anthropic Wants Its AI Agent to Control Your Computer

WIRED

It took a while for people to adjust to the idea of chatbots that seem to have minds of their own. The next leap into the unknown may involve trusting artificial intelligence to take over our computers, too. Anthropic, a high-flying competitor to OpenAI, announced today that it has taught its AI model Claude to do a range of things on a computer, including searching the web, opening applications, and inputting text using the mouse and keyboard. "I think we're going to enter into a new era where a model can use all of the tools that you use as a person to get tasks done," says Jared Kaplan, chief science officer at Anthropic and an associate professor at Johns Hopkins University. Kaplan showed WIRED a prerecorded demo in which an "agentic" - or tool-using - version of Claude had been asked to help plan an outing to see the sunrise at the Golden Gate Bridge with a friend.