Apple cofounder Steve Wozniak dismisses AI concerns raised by the likes of Stephen Hawking and Nick Bostrom

#artificialintelligence

PayPal billionaire Elon Musk, Microsoft cofounder Bill Gates, and renowned scientist Stephen Hawking have called out artificial intelligence (AI) as one of the biggest threats to humanity's very existence. But Apple cofounder Steve Wozniak told Business Insider in an interview this week that he's not concerned about AI, and that he has reversed his earlier thinking on it for several reasons. "One being that Moore's Law isn't going to make those machines smart enough to think really the way a human does," said Wozniak. "Another is when machines can outthink humans, they can't be as intuitive and say what will I do next and what is an approach that might get me there."


Ethics of Artificial Intelligence: Yann LeCun, Nick Bostrom & Virginia Dignum

#artificialintelligence


Nick Bostrom: What happens when our computers get smarter than we are?

#artificialintelligence

Artificial intelligence is getting smarter by leaps and bounds -- within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values -- or will they have values of their own?


What happens when our computers get smarter than we are? - RBLS.

#artificialintelligence

Humans tend to think of the modern human condition as normal, says philosopher Nick Bostrom. But the reality is that human beings arrived on the planetary scene only recently – and from an evolutionary perspective, there's no guarantee that we'll continue to reign over the planet. One big question worth asking: when artificial intelligence advances beyond human intelligence (and Bostrom suggests it will happen faster than we think), how can we ensure that it advances human knowledge instead of wiping out humanity? "We should not be confident in our ability to keep a superintelligent genie locked up in its bottle," he says. In any event, the time to think about instilling human values in artificial intelligence is now, not later.


Maximizing paper clips

#artificialintelligence

In What's the Future, Tim O'Reilly argues that our world is governed by automated systems that are out of our control. Alluding to The Terminator, he says we're already in a "Skynet moment," dominated by artificial intelligence that can no longer be governed by its "former masters." The systems that control our lives optimize for the wrong things: they're carefully tuned to maximize short-term economic gain rather than long-term prosperity. The "flash crash" of 2010 was an economic event created purely by a malfunction in the software that runs our financial systems. However, the real danger of the Skynet moment isn't what happens when the software fails; it's what happens when the software works properly: maximizing short-term shareholder value without considering any other aspects of the world we live in.