Apple cofounder Steve Wozniak dismisses AI concerns raised by the likes of Stephen Hawking and Nick Bostrom

#artificialintelligence

PayPal billionaire Elon Musk, Microsoft cofounder Bill Gates, and renowned scientist Stephen Hawking have called out artificial intelligence (AI) as one of the biggest threats to humanity's very existence. But Apple cofounder Steve Wozniak told Business Insider in an interview this week that he's not concerned about AI. He said he reversed his thinking on AI for several reasons. "One being that Moore's Law isn't going to make those machines smart enough to think really the way a human does," said Wozniak. "Another is when machines can out-think humans, they can't be as intuitive and say, 'What will I do next?' and 'What is an approach that might get me there?'"


Nick Bostrom: The Ethics of the Artificial Intelligence Revolution

@machinelearnbot


What happens when our computers get smarter than we are? - RBLS.

#artificialintelligence

Humans tend to think of the modern human condition as normal, says philosopher Nick Bostrom. But the reality is that human beings arrived on the planetary scene only recently – and from an evolutionary perspective, there's no guarantee that we'll continue to reign over the planet. One big question worth asking: when artificial intelligence advances beyond human intelligence (and Bostrom suggests it will happen faster than we think), how can we ensure that it advances human knowledge instead of wiping out humanity? "We should not be confident in our ability to keep a superintelligent genie locked up in its bottle," he says. In any event, the time to think about instilling human values in artificial intelligence is now, not later.


A Viral Game About Paperclips Teaches You to Be a World-Killing AI

WIRED

The idea of a paperclip-making AI didn't originate with Lantz. Most people ascribe it to Nick Bostrom, a philosopher at Oxford University and the author of the book Superintelligence. The New Yorker (owned by Condé Nast, which also owns Wired) called Bostrom "the philosopher of doomsday," because he writes and thinks deeply about what would happen if a computer got really, really smart. Not, like, "wow, Alexa can understand me when I ask it to play NPR" smart, but like really smart.