The Case That A.I. Is Thinking

The New Yorker 

ChatGPT does not have an inner life. Yet it seems to know what it's talking about. How convincing does the illusion of understanding have to be before you stop calling it an illusion?

Dario Amodei, the C.E.O. of the artificial-intelligence company Anthropic, has been predicting that an A.I. "smarter than a Nobel Prize winner" in such fields as biology, math, engineering, and writing might come online by 2027. He envisions millions of copies of a model whirring away, each conducting its own research: a "country of geniuses in a datacenter." In June, Sam Altman, of OpenAI, wrote that the industry was on the cusp of building "digital superintelligence." "The 2030s are likely going to be wildly different from any time that has come before," he asserted.

Meanwhile, the A.I. tools that most people currently interact with on a day-to-day basis are reminiscent of Clippy, the onetime Microsoft Office "assistant" that was actually more of a gadfly. A Zoom A.I. tool suggests that you ask it "What are some meeting icebreakers?" or instruct it to "Write a short message to share gratitude." Siri is good at setting reminders but not much else. A friend of mine saw a button in Gmail that said "Thank and tell anecdote." When he clicked it, Google's A.I. invented a funny story about a trip to Turkey that he never took.

The rushed and uneven rollout of A.I. has created a fog in which it is tempting to conclude that there is nothing to see here--that it's all hype. There is, to be sure, plenty of hype: Amodei's timeline is science-fictional.