AlphaGo Wins Final Game In Match Against Champion Go Player

IEEE Spectrum Robotics

AlphaGo, a largely self-taught Go-playing AI, last night won the fifth and final game in a match held in Seoul, South Korea, against Lee Sedol, one of the greatest modern players of the ancient Chinese game. The final score was 4 games to 1. Thus falls the last and computationally hardest game that programmers have taken as a test of machine intelligence. Chess, AI's original touchstone, fell to the machines 19 years ago, but Go had been expected to hold out for many years to come. The sweeping victory means far more than the US $1 million prize, which Google's London-based acquisition, DeepMind, says it will give to charity.


AlphaGo, Deep Learning, and the Future of the Human Microscopist

#artificialintelligence

In March of last year, Google's (Mountain View, California) artificial intelligence (AI) computer program AlphaGo beat the best Go player in the world, 18-time champion Lee Se-dol, in a tournament, winning 4 of 5 games. At first glance this news would seem of little interest to a pathologist, or to anyone else for that matter. After all, many will remember that IBM's (Armonk, New York) computer program Deep Blue beat Garry Kasparov, at the time the greatest chess player in the world, and that was 19 years ago. The rules of the several-thousand-year-old game of Go are extremely simple. The board consists of 19 horizontal and 19 vertical black lines.


Will the machines take over our jobs? Ipsos MORI Almanac

#artificialintelligence

People have always had a fascination with robots – from da Vinci's automatons in the 15th century right the way through to Channel 4's biggest drama hit in 20 years, Humans, which is back for a second season. The power and capability of robotics and computing is increasing at pace – Google DeepMind's AlphaGo programme finally beat Lee Sedol at the game of Go earlier this year, something programmes have aimed for since IBM's Deep Blue beat chess champion Garry Kasparov in 1997. Our fascination with this progress is certainly tinged with fear. While it may make for excellent TV, we are not comfortable with the idea of machines coming for our jobs, our partners, and world domination. The robot apocalypse is still (hopefully) a while off, so rather than hiding in a bunker and waiting for the end, we should instead look to the world of chess as an example of how to make the most of technology.


To drive AI forward, teach computers to play old-school text adventure games

#artificialintelligence

Games have long been used as test beds and benchmarks for artificial intelligence, and there has been no shortage of achievements in recent months. Google DeepMind's AlphaGo and the poker bot Libratus from Carnegie Mellon University have both beaten human experts at games that have traditionally been hard for AI – some 20 years after IBM's Deep Blue achieved the same feat in chess. Games like these have the attraction of clearly defined rules; they are relatively simple and cheap for AI researchers to work with, and they provide a variety of cognitive challenges at any desired level of difficulty. By inventing algorithms that play them well, researchers hope to gain insights into the mechanisms needed to function autonomously. With the arrival of the latest techniques in AI and machine learning, attention is now shifting to visually detailed computer games – including the 3D shooter Doom, various 2D Atari games such as Pong and Space Invaders, and the real-time strategy game StarCraft.


One of the world's most popular computer games will soon be open to many sophisticated AI players

#artificialintelligence

Teaching computers to play the board game Go is impressive, but if we really want to push the limits of machine intelligence, perhaps they'll need to learn to rush a Zerg or set a trap for a horde of invading Protoss ships. StarCraft, a hugely popular science-fiction-themed strategy computer game, will soon be accessible to advanced AI players. Blizzard Entertainment, the company behind the game, and Google DeepMind, a subsidiary of Alphabet focused on developing general-purpose artificial intelligence, announced the move at a games conference today. Teaching computers to play StarCraft II expertly would be a significant milestone in artificial-intelligence research. Within the game, players must build bases, mine resources, and attack their opponents' outposts.