Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence. The late Stephen Hawking called artificial intelligence the biggest threat to humanity. But Hawking, though a revered physicist, was not a computer scientist. Elon Musk compared AI adoption to "summoning the devil." But Elon is, well, Elon.
Twenty years ago, Stuart Russell co-wrote a book titled Artificial Intelligence: A Modern Approach (AIMA), destined to become the dominant text in its field. Near the end of the book, he posed a question: "What if A.I. does succeed?" Today, progress toward human-level artificial intelligence (A.I.) is rapid, and Russell, a professor of computer science, is posing the same question with more urgency. The benefits of A.I. are not at issue. If improperly constrained, Russell warns, a machine as smart as or smarter than humans "is of no use whatsoever -- in fact it's catastrophic."
In Human Compatible, his new book on artificial intelligence (AI), Stuart Russell confronts head-on what he calls "the problem of control": the possibility that general-purpose AI will ultimately eclipse the intellectual capacities of its creators, to irreversible dystopian effect. The control problem is not new. As early as 1950, Norbert Wiener, the founder of cybernetics, was writing (in The Human Use of Human Beings) that the danger to society "is not from the machine itself but from what man makes of it". Russell's book in effect hangs on this tension: whether the problem is controlling the creature, or the creator.
Slate is currently running a feature called "Future Tense," which claims to be the "citizen's guide to the future." Two of its recent articles, however, are full of inaccuracies about AI safety and the researchers studying it. While this is disappointing, it also represents a good opportunity to clear up some misconceptions about why AI safety research is necessary. The first contested article, "Let Artificial Intelligence Evolve," by Michael Chorost, displays a poor understanding of the issues surrounding the evolution of artificial intelligence. The second, "How to Be Good," by Adam Elkus, got some of the concerns about developing safe AI right but, in the process, did a great disservice to one of today's most prominent AI safety researchers, as well as to scientific research in general.
At a recent meeting of the World Economic Forum, someone asked Stuart Russell, a professor of computer science at the University of California, Berkeley, when superintelligent artificial intelligence (AI) might arrive. He loosely estimated that it would come within his children's lifetime, then reminded the audience that the meeting was held under the Chatham House Rule and that his conjecture was "strictly off the record." But, he writes in his new book Human Compatible: Artificial Intelligence and the Problem of Control, "Less than two hours later, an article appeared in the Daily Telegraph citing Professor Russell's remarks, complete with images of rampaging Terminator robots." Hyperbole by many media outlets has made it challenging for experts to talk seriously about the dangers of artificial superintelligence--a technology that would surpass the intellectual capabilities of humans. Nonetheless, many experts have written books on the subject.