We hear a lot about AI and its transformative potential. What that means for the future of humanity, however, is not altogether clear. Some futurists believe life will be improved, while others think it is under serious threat. Here's a range of takes from 11 experts.
On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I've Got a Secret. He was introduced by the host, Steve Allen; then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact, and the panelists (they included a comedian and a former Miss America) had to guess what it was. On the show (see the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil then demonstrated the computer, which he had built himself: a desk-size affair with loudly clacking relays, hooked up to a typewriter.
In recent months, several prominent champions of technology -- Bill Gates, Stephen Hawking, and Tesla founder Elon Musk among them -- have declared that the greatest threat to humankind is not climate change, nuclear warfare, religious fanaticism or bacterial superbugs. No, according to these famous forward-thinkers, the threat we should really be worried about is advanced artificial intelligence. That is, we should be worried about supersmart robots and computers guided by a globally networked über-entity that will one day be able to outlearn, outthink and outcompete the human species and send it hurtling toward extinction. Late last year, Hawking, a physicist, came right out and told the BBC: "The development of full artificial intelligence could spell the end of the human race." In a recent symposium at the Massachusetts Institute of Technology, Tesla's Musk said that creating advanced artificial intelligence was "summoning the demon."
Our existence as a species is, in all likelihood, limited. Whether the downfall of the human race begins with a devastating asteroid impact, a natural pandemic, or an all-out nuclear war, we face a number of risks to our future, ranging from the vastly remote to the almost inevitable. Global catastrophic events like these would, of course, be devastating for our species. Yet even if a nuclear war obliterated 99% of the human race, the surviving 1% could feasibly recover, and even thrive years down the line, with no lasting damage to our species' potential. Some events, though, there is no coming back from.
As humanity stands on the brink of a technology-triggered information revolution, the scale, scope and complexity of the impact of intelligence evolution in machines is unlike anything humankind has experienced before. The speed at which ideas, innovations and inventions are emerging on the back of artificial intelligence has no historical precedent, and it is fundamentally disrupting everything in the human ecosystem. In addition, the breadth, depth and impact of this intelligence evolution on ideas and innovations across cyberspace, geospace and space (CGS) herald the fundamental transformation of entire interconnected and interdependent systems: basic and applied science, research and development, concept to commercialization, politics to governance, socialization to capitalism, education to training, production to markets, and survival to security. The technology-triggered intelligence evolution in machines, and the linkages between ideas, innovations and trends, have in fact brought us to the doorstep of the singularity. Whether or not we believe the singularity will happen, the very thought raises many concerns and critical security uncertainties for the future of humanity.