This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. Since the inception of artificial intelligence in the 1950s, we've been trying to find ways to measure progress in the field. For many, the gold standard for AI is the Turing Test, an evaluation of whether a computer can exhibit human behavior. But the Turing Test only determines whether an AI can fool humans, not compete with them, and it's hard to say how deep the test really goes. A much better arena for testing the extent of AI's intelligence, many scientists believe, is games: domains where contestants can measure and compare their success and clearly determine which one performs better.
As artificial intelligence (AI) research and development continues to advance, there have been some incredibly intriguing projects in which machines battled humans at tasks once thought to be exclusively the realm of people. While not all were 100% successful, AI researchers and technology companies learned a great deal about how to maintain forward momentum, as well as what a future might look like when machines and humans work alongside one another. Here are some of the highlights from when artificial intelligence battled humans. World chess champion Garry Kasparov competed against artificial intelligence twice. In the first chess match-up between machine (IBM Deep Blue) and man (Kasparov), in 1996, Kasparov won.
AlphaZero, the game-playing AI created by Google sibling DeepMind, has beaten the world's best chess-playing computer program, having taught itself how to play in under four hours. The repurposed AI, which as AlphaGo repeatedly beat the world's best Go players, has been generalised so that it can now learn other games. It took just four hours of self-play to learn chess before beating the world-champion chess program, Stockfish 8, in a 100-game match. Artificial intelligence has various definitions, but in general it means a program that uses data to build a model of some aspect of the world, which is then used to make informed decisions and predictions about future events.
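That definition, building a model from data and using it to predict, can be made concrete at a toy scale. The sketch below fits a simple linear model to made-up practice-versus-rating data by ordinary least squares; the numbers and the scenario are purely hypothetical, and this is of course nothing like DeepMind's systems, just the bare data-to-model-to-prediction loop.

```python
# Minimal illustration: build a model from data, then use it to predict.
# Toy, made-up data: (hours of practice, rating gain).
data = [(1, 12), (2, 19), (3, 31), (4, 42), (5, 48)]

# Fit a linear model y = a*x + b by ordinary least squares.
n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in data) / \
    sum((x - mean_x) ** 2 for x, _ in data)
b = mean_y - a * mean_x

# Use the model to make an informed prediction about an unseen case.
predicted = a * 6 + b
print(round(predicted, 1))  # -> 58.9
```

Everything interesting about real AI systems lies in how much richer the model is, but the shape of the process is the same: observed data in, a fitted model, predictions out.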
DeepMind, the London-based subsidiary of Alphabet, has created a system that can quickly master any game in the class that includes chess, Go, and shogi, and do so without human guidance. The system, called AlphaZero, began its life last year by beating a DeepMind system that had been specialized just for Go. That earlier system had itself made history by beating one of the world's best Go players, but it needed human help to get through a months-long course of improvement. AlphaZero trained itself in just three days. Playing White against Stockfish, AlphaZero began by identifying four candidate moves.
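The idea of mastering a game from nothing but its rules can be sketched at a toy scale. The snippet below solves the game of Nim (players alternate removing 1-3 stones; whoever takes the last stone wins) using only the rules, with no human examples. To be clear, this is a hypothetical illustration, not DeepMind's method: AlphaZero combines a neural network with Monte Carlo tree search and self-play training, whereas here plain exhaustive game-tree search stands in for "learning", which is only feasible because the game is tiny.

```python
from functools import lru_cache

# Toy illustration of mastering a game from its rules alone (not AlphaZero's
# actual method). Game: Nim -- players alternate removing 1-3 stones from a
# pile, and the player who takes the last stone wins.

@lru_cache(maxsize=None)
def mover_wins(stones):
    """True if the player to move can force a win with `stones` left."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # A position is winning if some move leaves the opponent a losing one.
    return any(not mover_wins(stones - m) for m in (1, 2, 3) if m <= stones)

def best_move(stones):
    """Pick a move that leaves the opponent a losing position, if any."""
    for m in (1, 2, 3):
        if m <= stones and not mover_wins(stones - m):
            return m
    return 1  # every move loses against perfect play; remove one stone

print(best_move(10))  # -> 2, leaving the opponent the losing position 8
```

The solver discovers on its own that positions divisible by four are lost for the player to move, the same kind of strategic knowledge a human would otherwise have to supply.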
Studies show that many Americans are worried that AI is coming for their jobs: Uber and Lyft drivers, couriers, receptionists, even software engineers. A remarkable exhibition match today suggested that another group that should be worried is ... pro video gamers. In a stunning demonstration of how far AI capabilities have come, AlphaStar, a new AI system from Google's DeepMind, competed against pro players in a series of competitive StarCraft games. StarCraft is a complicated strategy game that requires players to weigh hundreds of options at any given moment, to make strategic choices whose payoffs lie a long way down the road, and to operate in a fast-changing environment with imperfect information. More than 200,000 games of StarCraft are played every day.