Artificial intelligence is getting smarter by leaps and bounds -- within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values -- or will they have values of their own?
Led by Founding Director Prof. Nick Bostrom, the Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford. It enables a select set of leading intellects to bring the tools of mathematics, philosophy, and science to bear on big-picture questions about humanity and its prospects. The Future of Humanity Institute's mission is to shed light on crucial considerations for humanity's future. We seek to focus our work where we can make the greatest positive difference. Prof. Nick Bostrom's New York Times best seller Superintelligence: Paths, Dangers, Strategies provides an introduction to our work on the long-term implications of artificial intelligence.
The idea of a paperclip-making AI didn't originate with Lantz. Most people ascribe it to Nick Bostrom, a philosopher at Oxford University and the author of the book Superintelligence. The New Yorker (owned by Condé Nast, which also owns Wired) called Bostrom "the philosopher of doomsday," because he writes and thinks deeply about what would happen if a computer got really, really smart. Not, like, "wow, Alexa can understand me when I ask it to play NPR" smart, but like really smart.
It's predicted that sometime in the next 25 years, artificial intelligence machines will match, and in some ways surpass, human intelligence. The potential ripple effects of that are staggering. Corporations and governments are now spending billions of dollars on developing bigger and smarter A.I. technology. Their goal is to create machines that think for themselves. But some warn it could go terribly wrong, and the warning is being sounded by the likes of Bill Gates, Elon Musk and Stephen Hawking.
Economics correspondent Paul Solman and Swedish philosopher Nick Bostrom discuss existential threats such as nuclear winter and how the biggest threat to humanity may be what we don't yet know. Editor's note: Economics correspondent Paul Solman recently traveled to Oxford University's Future of Humanity Institute. And yes, there is an institute that studies only that -- the future of the human species. In PBS NewsHour's Thursday Making Sen$e report, Paul speaks with the institute's founding director, Nick Bostrom, a Swedish philosopher known for his work on artificial intelligence and existential threats. You can also watch Bostrom's TED talk on "superintelligence" -- what happens when computers become smarter than humans.