AlphaGo, Deep Learning, and the Future of the Human Microscopist

#artificialintelligence

In March of last year, Google's (Mountain View, California) artificial intelligence (AI) computer program AlphaGo beat the best Go player in the world, 18-time champion Lee Se-dol, winning 4 of 5 games in the match.[1] At first glance this news would seem of little interest to a pathologist, or to anyone else for that matter. After all, many will remember that IBM's (Armonk, New York) computer program Deep Blue beat Garry Kasparov, at the time the greatest chess player in the world, and that was 19 years ago. The rules of the several-thousand-year-old game of Go are extremely simple. The board consists of 19 horizontal and 19 vertical black lines.


Temporal Difference Learning of Position Evaluation in the Game of Go

Neural Information Processing Systems

Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, San Diego, CA 92186-5800

Abstract: The game of Go has a high branching factor that defeats the tree-search approach used in computer chess, and long-range spatiotemporal interactions that make position evaluation extremely difficult. Development of conventional Go programs is hampered by their knowledge-intensive nature. We demonstrate a viable alternative by training networks to evaluate Go positions via temporal difference (TD) learning. Our approach is based on network architectures that reflect the spatial organization of both input and reinforcement signals on the Go board, and training protocols that provide exposure to competent (though unlabelled) play. These techniques yield far better performance than undifferentiated networks trained by self-play alone. A network with fewer than 500 weights learned, within 3,000 games of 9x9 Go, a position evaluation function that enables a primitive one-ply search to defeat a commercial Go program at a low playing level.

1 INTRODUCTION. Go was developed three to four millennia ago in China; it is the oldest and one of the most popular board games in the world.
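The core of the abstract's method is TD learning of a position-evaluation function. The following is a minimal sketch of a TD(0) update for 9x9 Go position evaluation, not the paper's network: it assumes a toy linear evaluator with a sigmoid output, and random stone placements stand in for the recorded competent play the paper trained on. All names and sizes here are illustrative.

```python
# Minimal TD(0) sketch for 9x9 Go position evaluation (illustrative only).
import numpy as np

BOARD = 9 * 9  # one input per board point: +1 Black, -1 White, 0 empty

rng = np.random.default_rng(0)
weights = np.zeros(BOARD)  # toy linear value function V(s) = sigmoid(w . s)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def value(position):
    """Estimated probability that Black wins from this position."""
    return sigmoid(weights @ position)

def td0_update(positions, outcome, alpha=0.01):
    """TD(0): nudge each position's value toward the next position's value;
    the final position is nudged toward the game outcome (1 = Black win)."""
    global weights
    for t in range(len(positions)):
        s = positions[t]
        v = value(s)
        target = outcome if t == len(positions) - 1 else value(positions[t + 1])
        # semi-gradient step; d/dw sigmoid(w.s) = v * (1 - v) * s
        weights += alpha * (target - v) * v * (1.0 - v) * s

# Hypothetical training loop: random positions stand in for the games of
# competent (though unlabelled) play the abstract describes.
for game in range(3000):
    positions = [rng.choice([-1.0, 0.0, 1.0], size=BOARD) for _ in range(40)]
    outcome = float(rng.integers(0, 2))
    td0_update(positions, outcome)
```

In the paper's actual setup the evaluator is a small network whose architecture mirrors the spatial structure of the board; a one-ply search then picks the move leading to the position this function rates highest.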


DeepMind - Wikipedia

#artificialintelligence

DeepMind Technologies is a British artificial intelligence company founded in September 2010, currently owned by Alphabet Inc. The company is based in London, but has research centres in California, Canada,[4] and France.[5] Acquired by Google in 2014, the company has created a neural network that learns how to play video games in a fashion similar to that of humans,[6] as well as a Neural Turing machine,[7] a neural network that may be able to access an external memory like a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain.[8][9] The company made headlines in 2016 after its AlphaGo program beat a human professional Go player for the first time in October 2015,[10] and again when AlphaGo beat Lee Sedol, the world champion, in a five-game match, which was the subject of a documentary film.[11] A more general program, AlphaZero, beat the most powerful programs playing Go, chess, and shogi (Japanese chess) after a few hours of play against itself using reinforcement learning.[12]
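The external-memory idea behind the Neural Turing machine can be made concrete with a short sketch. This is a minimal illustration of content-based addressing in the spirit of that architecture, not DeepMind's implementation; the memory size, key dimension, and function names are assumptions for the example.

```python
# Illustrative content-based memory read, NTM-style (not DeepMind's code).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_read(memory, key, beta=1.0):
    """Content-based addressing: score each memory row by cosine similarity
    to the key, sharpen with beta, normalize to attention weights, and
    return the weighted sum of rows as the read vector."""
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    )
    w = softmax(beta * sims)
    return w @ memory  # a differentiable "read" from external memory

memory = np.random.randn(128, 20)  # hypothetical: 128 slots, 20-dim contents
key = np.random.randn(20)          # in an NTM, emitted by a controller network
read_vector = content_read(memory, key)
```

Because the read is a soft weighted sum rather than a hard lookup, the whole memory access is differentiable and can be trained end to end with the controller, which is what lets the network learn to use its memory.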


Next AI challenge: Computers take on StarCraft

#artificialintelligence

From Chess to Go, board games have been the first frontier of artificial intelligence research for decades. Now, the team at Google's DeepMind wants to take AI to a whole new level by mastering the real-time strategy game StarCraft II. DeepMind announced its partnership with StarCraft's creator, Blizzard, at a conference in California. The two groups say they look forward to programming a computer that can react to strategic problems in real time. "DeepMind is on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be told how," wrote DeepMind in a blog post.