In Go, no successful evaluation function for non-terminal positions has ever been found. It is therefore not a problem that will be solved by faster search alone; instead it pushes the boundaries of what is possible with new algorithms such as Monte Carlo methods. Work on computer Go started in the 1960s, but it was not until 2016 that the AlphaGo program was able to defeat one of the world's top-ranked professional Go players.
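The Monte Carlo idea mentioned above can be sketched on a toy game: instead of a hand-crafted evaluation function, a non-terminal position is scored by playing many random games to completion and averaging the outcomes. The subtraction game and function names below are illustrative assumptions for the sketch, not part of any real Go program.

```python
import random

# Toy subtraction game: players alternately remove 1-3 stones;
# the player who takes the last stone wins. (Illustrative only.)

def random_playout(stones, player_to_move):
    """Play random moves to the end; return the winning player (0 or 1)."""
    player = player_to_move
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return player  # this player took the last stone and wins
        player = 1 - player

def monte_carlo_value(stones, player_to_move, n_playouts=5000):
    """Estimate the win probability for the player to move by averaging
    random playouts, sidestepping the need for a hand-crafted evaluation
    function -- the core Monte Carlo idea used in computer Go."""
    wins = sum(random_playout(stones, player_to_move) == player_to_move
               for _ in range(n_playouts))
    return wins / n_playouts
```

A search algorithm can then compare `monte_carlo_value` across candidate moves rather than relying on a static evaluation, which is what made Monte Carlo tree search viable for Go.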
AI (artificial intelligence) will not confront human beings but will serve as a tool at their disposal, as the human brain will remain the most powerful, although some say AI machines may be able to talk with people and judge their emotions by 2045 at the earliest, according to Aja Huang, one of the key developers behind AlphaGo, an AI program developed by Google's DeepMind unit. Huang made the comments in a speech at the 2017 Taiwan AI Conference, hosted recently by the Institute of Information Science under Academia Sinica and the Taiwan Data Science Foundation. Huang recalled that he was invited to join London-based DeepMind Technologies in late 2012, two years after he won the gold medal at the 15th Computer Olympiad in Kanazawa in 2010. In February 2014, DeepMind was acquired by Google, giving the AI team access to advanced hardware resources such as powerful TPUs (tensor processing units) and enabling it to build the world's most powerful Go-playing AI program, AlphaGo, which has stunned the world by beating top Go players. In March 2016, AlphaGo beat Lee Sedol, a South Korean professional Go player, in a five-game match, marking the first time a computer Go program had beaten a 9-dan professional without handicaps.
Yao Xin is the founder of PPLIVE and an alumnus of the 3rd CEIBS Entrepreneurial Leadership Camp. "Why is there so much discussion about artificial intelligence these days? I think it's likely because of last year's Man vs Machine battle between world Go champion Lee Sedol and Google DeepMind's artificial intelligence programme AlphaGo. But this wasn't the first Man vs Machine battle. In 1996, Chess Grandmaster Garry Kasparov defeated the IBM supercomputer Deep Blue 4-2 in a six-game match, winning three games, drawing two, and losing one.
Earlier this year Google revealed AlphaGo Zero, a machine-learning system that, in a short space of time, was able to become a world master at the notoriously complex game of Go. AlphaGo Zero played "completely random" games against itself, and then learnt from the results. In just three days it was able to defeat, by 100 games to 0, the version of AlphaGo that beat Go world champion Lee Se-dol in March 2016, a victory hailed as a milestone for AI development. After 21 days of playing itself it had gone even further, besting AlphaGo Master -- an online version of AlphaGo that won more than 60 straight games against top Go players -- and within 40 days it was able to beat all other versions of AlphaGo. At the time, DeepMind lead researcher David Silver said that achieving this level of performance in a domain as complicated as Go "should mean that we can now start to tackle some of the most challenging and impactful problems for humanity".
Alphabet's DeepMind has been making incredible strides in the field of artificial intelligence (AI). Its AI can create pictures based on sentences, play StarCraft, and explore strange environments. It has also developed forms of memory and the ability to imagine solutions to problems. AlphaGo, an AI, was created by DeepMind in order to conquer the oldest game in the world: Go, an incredibly popular game known for being even more complex than chess. What better game to test an AI on?
Google's DeepMind Artificial Intelligence AlphaGo Zero recently attained an important milestone -- the Artificial Intelligence (AI) taught itself how to play the strategy game Go without any human interaction and was able to beat the world's best Go players. The ability to reach this level of performance without any human input is a significant step forward in the maturation of AI. Over the past several years, AI has made significant progress in a wide variety of areas such as image and speech recognition, drug discovery, and algorithmic trading. In most of these cases, the AI relies on vast existing data sets and some degree of human engagement. A long-standing ambition of AI researchers has been to create algorithms that neither rely on pre-existing data sets nor require human input.
Siby Abraham is a computer scientist specialising in artificial intelligence. He is an associate professor and head of the department of mathematics and statistics at Guru Nanak Khalsa College, Mumbai. How many years would it take for a child who knows almost nothing about English to master it at a Shakespearean level? Assume that there is no one to teach the child, and that the child begins with only the fundamentals of English grammar. Suppose also that there is no book, no help, and no support (human or non-human) at any time.
Recently, Google DeepMind's program AlphaGo Zero achieved superhuman level without any help -- entirely by self-play! Here is the Nature paper explaining the technical details (also available as a PDF: Mastering the Game of Go without Human Knowledge). One of the main reasons for its success was the use of a novel form of reinforcement learning in which AlphaGo learned by playing against itself. The system starts with a neural network that knows nothing about Go. It plays millions of games against itself, tuning the neural network to predict the next move and the eventual winner of each game. The updated neural network is then recombined with the Monte Carlo Tree Search algorithm to create a new, stronger version of AlphaGo Zero, and the process repeats.
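The self-play loop described above can be sketched in miniature. The toy subtraction game, the tabular "network" of move preferences, and the crude reward-based update below are all illustrative stand-ins of my own; the real system uses a deep residual network trained on MCTS-improved move targets, and the tree search is omitted here for brevity.

```python
import random
from collections import defaultdict

# Toy game: players alternately remove 1-3 stones; taking the last stone wins.
# The "network" is just a table of move preferences per position -- a
# hypothetical simplification of AlphaGo Zero's neural network.
prefs = defaultdict(lambda: [1.0, 1.0, 1.0])  # weight for removing 1, 2, 3

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def pick_move(stones):
    """Sample a move in proportion to the learned preferences."""
    moves = legal_moves(stones)
    weights = [prefs[stones][m - 1] for m in moves]
    return random.choices(moves, weights=weights)[0]

def self_play_game(start=10):
    """Play one game against itself; return (history, winner)."""
    history, stones, player = [], start, 0
    while True:
        move = pick_move(stones)
        history.append((stones, player, move))
        stones -= move
        if stones == 0:
            return history, player  # player who took the last stone wins
        player = 1 - player

def train(n_games=20000, lr=0.05):
    """Self-play loop: nudge preferences toward the winner's moves and away
    from the loser's -- a crude echo of tuning the network to predict the
    eventual winner, without the Monte Carlo Tree Search step."""
    for _ in range(n_games):
        history, winner = self_play_game()
        for stones, player, move in history:
            delta = lr if player == winner else -lr
            prefs[stones][move - 1] = max(0.01, prefs[stones][move - 1] + delta)

train()
```

After training, the move preferences should drift toward winning play, though this toy update is far weaker than the MCTS-guided policy improvement that makes the real loop converge.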
Recently Google DeepMind announced AlphaGo Zero -- an extraordinary achievement that has shown how it is possible to train an agent to a superhuman level in the highly complex and challenging domain of Go 'tabula rasa' -- that is, from a blank slate, with no human expert play used as training data. It thrashed the previous incarnation 100-0, using only 4 TPUs instead of 48 TPUs and a single neural network instead of two. Want to quickly learn how it works?
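The "single neural network instead of two" refers to AlphaGo Zero merging the earlier separate policy and value networks into one: a shared trunk feeds both a policy head (a probability over moves) and a value head (a scalar estimate of who wins). A minimal pure-Python sketch of that two-headed shape follows; the layer sizes, names, and tiny dense layers are illustrative assumptions, as the real network is a deep residual convolutional net.

```python
import math
import random

random.seed(0)

BOARD_FEATURES = 32  # stand-in for the encoded board planes (illustrative)
HIDDEN = 16          # shared trunk width (illustrative)
NUM_MOVES = 82       # e.g. a 9x9 board plus pass (illustrative)

def rand_matrix(rows, cols):
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

W_trunk = rand_matrix(HIDDEN, BOARD_FEATURES)        # shared trunk
W_policy = rand_matrix(NUM_MOVES, HIDDEN)            # policy head
w_value = [random.gauss(0, 0.1) for _ in range(HIDDEN)]  # value head

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def forward(x):
    """One forward pass: shared features, then both heads from the same trunk."""
    h = [math.tanh(v) for v in matvec(W_trunk, x)]   # shared representation
    logits = matvec(W_policy, h)
    m = max(logits)                                  # stabilised softmax
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    policy = [e / s for e in exps]                   # distribution over moves
    value = math.tanh(sum(w * hi for w, hi in zip(w_value, h)))  # in (-1, 1)
    return policy, value

policy, value = forward([0.5] * BOARD_FEATURES)
```

Sharing one trunk means both heads are trained jointly from the same features, which is part of why the Zero architecture is both stronger and cheaper to run than its two-network predecessor.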
There have been far too many fear-mongering news articles about the latest version of DeepMind's AlphaGo. Let's set the record straight: AlphaGo is an incredible technology, and it's not terrifying at all. I'll go over the technical details of how AlphaGo really works: a mixture of deep learning and reinforcement learning. That's what keeps me going.