We're Focusing on the Wrong Kind of AI Apocalypse
Conversations about the future of AI are too apocalyptic. Or rather, they focus on the wrong kind of apocalypse. There is considerable concern about the future of AI, especially as a number of prominent computer scientists have raised the risks of Artificial General Intelligence (AGI), an AI smarter than a human being. They worry that an AGI will lead to mass unemployment, or that AI will grow beyond human control, or worse (the movies Terminator and 2001 come to mind). Discussing these concerns seems important, as does thinking about the much more mundane and immediate threats of misinformation, deep fakes, and proliferation enabled by AI.
Apr-1-2024, 11:00:00 GMT