11 Quotes About AI That'll Make You Think

#artificialintelligence

We hear a lot about AI and its transformative potential. What that means for the future of humanity, however, is not altogether clear. Some futurists believe life will be improved, while others think it is under serious threat. Here's a range of takes from 11 experts.


2045: The Year Man Becomes Immortal

#artificialintelligence

On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I've Got a Secret. He was introduced by the host, Steve Allen; then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact, and the panelists (they included a comedian and a former Miss America) had to guess what it was. On the show (see the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil then demonstrated the computer, which he had built himself: a desk-size affair with loudly clacking relays, hooked up to a typewriter.


How the machines will take over

#artificialintelligence

In recent months, several prominent champions of technology -- Bill Gates, Stephen Hawking, and Tesla founder Elon Musk among them -- have declared that the greatest threat to humankind is not climate change, nuclear warfare, religious fanaticism or bacterial superbugs. No, according to these famous forward-thinkers, the threat we should really be worried about is advanced artificial intelligence. That is, we should be worried about supersmart robots and computers guided by a globally networked über-entity that will one day be able to outlearn, outthink and outcompete the human species and send it hurtling toward extinction. Late last year, Hawking, a physicist, came right out and told the BBC: "The development of full artificial intelligence could spell the end of the human race." In a recent symposium at the Massachusetts Institute of Technology, Tesla's Musk said that creating advanced artificial intelligence was "summoning the demon."


'AI could send us back to the stone age': In conversation with the End Of The World

#artificialintelligence

Our existence as a species is, in all likelihood, limited. Whether the downfall of the human race begins with a devastating asteroid impact, a natural pandemic, or an all-out nuclear war, we face a number of risks to our future, ranging from the vastly remote to the almost inevitable. Global catastrophic events like these would, of course, be devastating for our species. Yet even if a nuclear war obliterated 99% of the human race, the surviving 1% could feasibly recover, and even thrive years down the line, with no lasting damage to our species' potential. There are some events, though, from which there is no coming back.


Fear our new robot overlords: This is why you need to take artificial intelligence seriously

#artificialintelligence

There are a lot of major problems today with tangible, real-world consequences. A short list might include terrorism, U.S.-Russian relations, climate change and biodiversity loss, income inequality, health care, childhood poverty, and the homegrown threat of authoritarian populism, most notably associated with the presumptive nominee for the Republican Party, Donald Trump. Yet if you've been paying attention to the news for the past several years, you've almost certainly seen articles from a wide range of news outlets about the looming danger of artificial general intelligence, or "AGI." For example, Stephen Hawking has repeatedly warned that "the development of full artificial intelligence could spell the end of the human race," and Elon Musk, of Tesla and SpaceX fame, has described the creation of superintelligence as "summoning the demon." Furthermore, the Oxford philosopher and director of the Future of Humanity Institute, Nick Bostrom, published a New York Times best-selling book in 2014 called Superintelligence, in which he suggests that the "default outcome" of building a superintelligent machine will be "doom."