Here's why we need to start thinking of AI as "normal"
Instead, according to the researchers, AI is a general-purpose technology whose application might be better compared to the drawn-out adoption of electricity or the internet than to nuclear weapons, though they concede the analogy is in some ways flawed. The core point, Kapoor says, is that we need to distinguish between the rapid development of AI methods, the flashy and impressive displays of what AI can do in the lab, and the actual applications of AI, which in historical examples of other technologies lag behind by decades. "Much of the discussion of AI's societal impacts ignores this process of adoption," Kapoor told me, "and expects societal impacts to occur at the speed of technological development." In other words, the adoption of useful artificial intelligence, in his view, will be less a tsunami than a trickle.

In the essay, the pair make some other bracing arguments: terms like "superintelligence" are so incoherent and speculative that we shouldn't use them; AI won't automate everything but will instead create a new category of human labor that monitors, verifies, and supervises it; and we should focus more on AI's likelihood of worsening existing problems in society than on the possibility of it creating new ones.
April 29, 2025, 09:00 GMT