Looking over the year that has passed, it is a nice question whether human stupidity or artificial intelligence has done more to shape events. Perhaps it is the convergence of the two that we really need to fear. Artificial intelligence is a term whose meaning constantly recedes. Computers, it turns out, can do things that only the cleverest humans once could. But at the same time they fail at tasks that even the stupidest humans accomplish without conscious difficulty.
This article is reproduced with kind permission of Spiegel Online, where it first appeared. The author was asked to make the series personal: to describe the development of chess programming not as an academic treatise but as a personal story of how he experienced it. For some ChessBase readers a number of the passages will be familiar, since the stories have been told before on our pages. For others this can serve as a roadmap through one of the great scientific endeavors of our time. It was the mid-1990s.
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. Last week, in an essay for The New York Times, famous mathematician Steven Strogatz praised the recently published performance results of AlphaZero, the board game–playing AI developed by DeepMind, a British AI company acquired by Google in 2014. While his examination of AlphaZero's findings is an interesting read, some of the conclusions Strogatz draws about the general advances in AI are problematic. "[AlphaZero] clearly displays a breed of intellect that humans have not seen before, and that we will be mulling over for a long time to come," Strogatz writes early in the article. Further down, Strogatz writes, "By playing against itself and updating its neural network as it learned from experience, AlphaZero discovered the principles of chess on its own and quickly became the best player ever."
For humans, chess may take a lifetime to master. But Google DeepMind's new artificial intelligence program, AlphaZero, can teach itself to conquer the board in a matter of hours. Building on its past success with the AlphaGo suite (a series of computer programs designed to play the Chinese board game Go), Google boasts that its new AlphaZero achieves a level of "superhuman performance" at not just one board game but three: Go, chess, and shogi (essentially, Japanese chess). The team of computer scientists and engineers, led by Google's David Silver, reported its findings recently in the journal Science. "Before this, with machine learning, you could get a machine to do exactly what you want, but only that thing," says Ayanna Howard, an expert in interactive computing and artificial intelligence at the Georgia Institute of Technology, who did not participate in the research.
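The "playing against itself and updating as it learned from experience" recipe quoted above can be illustrated with a deliberately tiny sketch. Everything below is an illustrative invention, not DeepMind's method: AlphaZero combines deep neural networks with Monte Carlo tree search at enormous scale, whereas this toy replaces both with a lookup table and a trivial game (take 1 or 2 stones from a pile; whoever takes the last stone wins). What survives the simplification is the structure of self-play learning: play games against your own current policy, then nudge your value estimates toward the observed outcomes.

```python
import random

def self_play_train(pile=9, games=5000, lr=0.1, eps=0.2, seed=0):
    """Toy self-play loop (hypothetical example, not AlphaZero itself).

    Game: players alternate taking 1 or 2 stones; taking the last stone wins.
    value[n] estimates the chance that the player to move at pile size n wins.
    """
    rng = random.Random(seed)
    value = {n: 0.5 for n in range(pile + 1)}
    value[0] = 0.0  # no stones left: the player to move has already lost

    for _ in range(games):
        n, history = pile, []
        while n > 0:
            moves = [m for m in (1, 2) if m <= n]
            if rng.random() < eps:
                m = rng.choice(moves)  # occasionally explore a random move
            else:
                # greedy: leave the opponent in the worst-looking position
                m = min(moves, key=lambda m: value[n - m])
            history.append(n)
            n -= m
        # The player who took the last stone won. Walking the game backwards,
        # the win/loss label alternates between the two players' positions.
        outcome = 1.0
        for state in reversed(history):
            value[state] += lr * (outcome - value[state])
            outcome = 1.0 - outcome
    return value

values = self_play_train()
```

In this game, pile sizes divisible by 3 are theoretically losing for the player to move, and after a few thousand self-play games the table reflects that: the agent "discovers the principles" of the game purely from its own experience, which is the point the AlphaZero quote is making, just at a vastly smaller scale.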
Some call it "strong" AI, others "real" AI, "true" AI or artificial "general" intelligence (AGI)… whatever the term (and the important nuances), there are few questions of greater importance than whether we are collectively in the process of developing generalized AI that can truly think like a human, possibly even at a superhuman level of intelligence, with unpredictable, uncontrollable consequences. This has been a recurring theme of science fiction for many decades, but given the dramatic progress of AI over the last few years, the debate has been flaring anew with particular intensity, with an increasingly vocal stream of media reports and conversations warning us that AGI (of the nefarious kind) is coming, and much sooner than we'd think. The latest example: the new documentary Do You Trust This Computer?, which streamed last weekend for free courtesy of Elon Musk and features a number of respected AI experts from both academia and industry. The documentary paints an alarming picture of artificial intelligence, a "new life form" on planet Earth that is about to "wrap its tentacles" around us. There is also an accelerating flow of stories pointing to ever scarier aspects of AI: reports of alternate-reality creation (fake celebrity face generators and deepfakes, with full video generation and speech synthesis likely in the near future), the ever-so-spooky Boston Dynamics videos (latest one: robots cooperating to open a door), and reports about Google's AI getting "highly aggressive". However, as an investor who spends a lot of time in the "trenches" of AI, I have been experiencing a fair amount of cognitive dissonance on this topic.