What If A.I. Doesn't Get Much Better Than This?

The New Yorker 

For this week's Open Questions column, Cal Newport is filling in for Joshua Rothman.

Much of the euphoria and dread swirling around today's artificial-intelligence technologies can be traced back to January, 2020, when a team of researchers at OpenAI published a thirty-page report titled "Scaling Laws for Neural Language Models." The team was led by the A.I. researcher Jared Kaplan and included Dario Amodei, who is now the C.E.O. of Anthropic. They investigated a fairly nerdy question: What happens to the performance of language models when you increase their size and the intensity of their training? Back then, many machine-learning experts thought that, after reaching a certain size, language models would effectively start memorizing the answers to their training questions, which would make them less useful once deployed.