AI as management assistant: The artificial intelligence program AlphaGo got a lot of attention for beating 18-time Go world champion Lee Sedol in four out of five games last week. The significance of this achievement is rooted in the extraordinary number of possible moves in Go: roughly 2.08168199382 × 10^170, reportedly more than the number of atoms in the universe. That's too many possibilities for brute computing force to handle (which is how IBM's Deep Blue beat world chess champion Garry Kasparov 20 years ago). Yet AlphaGo, created by Google DeepMind, formerly British AI company DeepMind Technologies, mastered the 2,500-year-old board game on its own in a matter of months. "It started by studying a database of about 100,000 human matches, and then continued by playing against itself millions of times," reported science correspondent Geoff Brumfiel at NPR. Go bragging rights are nice for Google, but what does AlphaGo's victory mean for management?
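The two-stage recipe Brumfiel describes, learning first from human games and then from self-play, can be sketched in miniature. Everything here (the `Policy` class, the toy move lists) is an illustrative stand-in, not the real AlphaGo pipeline, which trains deep neural networks rather than counting move frequencies:

```python
import random

class Policy:
    """A trivial 'policy' that just remembers how often moves were seen."""
    def __init__(self):
        self.counts = {}

    def update(self, move):
        self.counts[move] = self.counts.get(move, 0) + 1

    def sample(self):
        # Pick among moves the policy already knows about.
        moves = list(self.counts) or ["pass"]
        return random.choice(moves)

def pretrain(policy, human_games):
    # Stage 1: learn from a database of human matches.
    for game in human_games:
        for move in game:
            policy.update(move)

def self_play(policy, n_games):
    # Stage 2: keep improving by playing against itself,
    # reinforcing the moves chosen during self-play.
    for _ in range(n_games):
        policy.update(policy.sample())

policy = Policy()
pretrain(policy, [["D4", "Q16"], ["Q4", "D16"]])  # two toy "human games"
self_play(policy, n_games=10)
```

The point of the sketch is only the order of operations: human data seeds the policy, and self-play then generates unlimited further training experience without any additional human input.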
Facebook is continuing its efforts to create artificial intelligence capable of outclassing all humans at the ancient Chinese strategy board game Go. The social media company recently published a research paper showcasing the progress it has made with its DarkForest bots, which combine deep convolutional neural networks with long-term move prediction to rank among the strongest machine Go players available. Yuandong Tian and Yan Zhu, AI researchers at Facebook, describe the program's performance in the paper's abstract: "Against human players, [darkfores2 achieves] a stable 3d level on KGS Go Server as a ranked bot," the duo points out. This is a visible improvement over the 4k-5k ranks that Clark & Storkey (2015) predicted for DCNN-based players after studying matches against other machine players.
Google's artificial intelligence (AI) division has achieved a landmark victory over a champion of the Chinese board game Go. The AlphaGo system beat three-time European Go champion Fan Hui 5-0 in a competition held at the company's London headquarters in October. AlphaGo was built by the AI division Google created through its reported £400m acquisition of London-based AI company DeepMind in 2014. The victory is notable because the complexity of Go makes it difficult for computer systems to calculate all the possible moves. Chess, for example, offers about 400 possible positions after each player's first move, but Go offers about 130,000, making it very difficult for an AI to perform those calculations to match a human opponent.
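Those two figures follow directly from the rules of each game: chess gives each side 20 legal first moves (16 pawn moves plus 4 knight moves), while a Go player may place the first stone on any of the 361 intersections of the 19-by-19 grid, leaving 360 for the reply. A quick check:

```python
# Distinct positions after each player's first move.
chess_openings = 20 * 20    # 20 legal first moves per side in chess
go_openings = 361 * 360     # 361 points for the first stone, 360 for the second

print(chess_openings)  # 400
print(go_openings)     # 129960, i.e. roughly 130,000
```

The gap only widens with every subsequent move, which is why exhaustive search works for chess but not for Go.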
It is no mystery why poker is such a popular pastime: the dynamic card game produces drama in spades as players are locked in a complicated tango of acting and reacting that becomes increasingly tense with each escalating bet. The same elements that make poker so entertaining have also created a complex problem for artificial intelligence (AI). A study published today in Science describes an AI system called DeepStack that recently defeated professional human players in heads-up, no-limit Texas hold'em poker, an achievement that represents a leap forward in the types of problems AI systems can solve. DeepStack, developed by researchers at the University of Alberta, relies on the use of artificial neural networks that researchers trained ahead of time to develop poker intuition. During play, DeepStack uses its poker smarts to break down a complicated game into smaller, more manageable pieces that it can then work through on the fly.
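The decomposition idea described above can be illustrated with a toy sketch: search only a few actions deep, then replace the rest of the game with a fast learned estimate instead of playing it out. Here `value_estimate` is a hypothetical stand-in for DeepStack's pretrained neural networks, and the single-player `max` is a simplification; the real system re-solves a two-player game over ranges of hidden hands:

```python
def value_estimate(state):
    # Placeholder "intuition": a cheap approximation of how good a
    # situation is, used instead of searching to the end of the game.
    return state.get("heuristic", 0.0)

def solve(state, depth):
    """Look ahead `depth` actions, then fall back on the estimate."""
    children = state.get("children", [])
    if depth == 0 or not children:
        return value_estimate(state)
    # The acting player picks the child subgame with the best value.
    return max(solve(child, depth - 1) for child in children)

# A tiny game tree: one line looks good immediately, another pays off
# only if the search looks one step further ahead.
tree = {"children": [
    {"heuristic": 1.0},
    {"heuristic": 0.0, "children": [{"heuristic": 5.0}]},
]}

print(solve(tree, 1))  # 1.0 — shallow search misses the deeper payoff
print(solve(tree, 2))  # 5.0 — one more step of lookahead finds it
```

The trade-off this sketch makes visible is the same one DeepStack manages: deeper search is more accurate but slower, so a good pretrained estimate lets the system truncate the lookahead and still play well in real time.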
An ancient Chinese board game that dates back nearly 3,000 years, Go is played on a 19-by-19 square grid, with each player trying to capture the opponent's territory. It was thought it would take at least another 10 years before a machine could beat a human at Go. That's like an aircraft that can fly faster and faster without the help of an engineer. How can that be possible? When machine learning first took off, it was used to predict how we click, buy, lie, or die.