Game of Go


How Machine Learning, AI & Data Visualization are Redefining Customer Experience

@machinelearnbot

Artificial Intelligence (AI), machine learning (ML), and data visualization are among the most popular buzzwords right now. Both AI and machine learning have taken on a tremendous role in shaping the business world, and they have particularly changed the way we think about the customer experience. When it comes to understanding the two, however, people often mix them up or assume that one can substitute for the other, which isn't quite the case. Artificial intelligence describes technologies with human-like intelligence that can perform cognitive tasks typically reserved for human minds. Machine learning, by contrast, describes the process by which machines take in data and essentially learn for themselves.
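To make the distinction concrete, here is a minimal, hypothetical sketch of the machine-learning side: rather than being given hand-written rules, a model is shown example customer records (invented for illustration) and infers the pattern itself. The scikit-learn calls are used only to show the generic fit/predict workflow, not any particular vendor's system.

```python
# Hypothetical example: a model "learns for itself" from labelled customer
# records instead of following hand-written rules. The data, feature meanings,
# and the churn task are all made up; scikit-learn is used only to illustrate
# the fit/predict pattern of machine learning.
from sklearn.linear_model import LogisticRegression

# Toy customer records: [visits_per_month, support_tickets]
X = [[12, 0], [3, 4], [8, 1], [1, 6], [15, 0], [2, 5]]
y = [0, 1, 0, 1, 0, 1]  # 1 = customer churned, 0 = customer stayed

model = LogisticRegression()
model.fit(X, y)                    # learn the pattern from the examples
print(model.predict([[10, 1]]))    # apply it to an unseen customer
```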


The world's smartest game-playing AI--DeepMind's AlphaGo--just got way smarter

#artificialintelligence

AlphaGo wasn't the best Go player on the planet for very long. A new version of the masterful AI program has emerged, and it's a monster. In a head-to-head matchup, AlphaGo Zero defeated the original program by 100 games to none. What's really cool is how AlphaGo Zero did it. Whereas the original AlphaGo learned by ingesting data from hundreds of thousands of games played by human experts, AlphaGo Zero, also developed by the Alphabet subsidiary DeepMind, started with nothing but a blank board and the rules of the game.
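The "blank board plus the rules" setup can be sketched in a few lines. The toy code below is only a schematic of the self-play idea, not DeepMind's implementation: the learner is handed nothing but rule functions (hypothetical legal_moves, apply_move and game_over for a trivial stand-in game) and produces its own training data by playing against itself.

```python
# Schematic of self-play, not DeepMind's implementation: the learner is given
# only the rules of a trivial stand-in "game" (9 empty cells that players fill
# in turn) and generates its own training data by playing against itself.
# All function names here are hypothetical.
import random

def legal_moves(state):                      # the rules: which moves are allowed
    return [m for m in range(9) if state[m] == 0]

def apply_move(state, move, player):         # the rules: how a move changes the board
    nxt = list(state)
    nxt[move] = player
    return tuple(nxt)

def game_over(state):                        # the rules: when the game ends
    return all(v != 0 for v in state)

def self_play_game(policy):
    """Play one game against itself, recording (state, move) pairs."""
    state, player, history = (0,) * 9, 1, []
    while not game_over(state):
        move = policy(state, legal_moves(state))
        history.append((state, move))
        state = apply_move(state, move, player)
        player = -player
    return history                           # in the real system, this data trains the network

random_policy = lambda state, moves: random.choice(moves)
print(len(self_play_game(random_policy)), "positions generated from the rules alone")
```

In the real system, the recorded positions and the final result of each game feed back into a neural network, which then generates stronger self-play games, and the loop repeats.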


Google's DeepMind achieves machine learning breakthroughs at a terrifying pace

#artificialintelligence

It's time to add "AI research" to the list of things that machines can do better than humans. Google's AlphaGo, the computer that beat the world's greatest human Go player, just lost to a version of itself that has never had a single human lesson. Google is making progress in the field of machine learning at a startling rate. The company's AutoML recently dropped jaws with its ability to self-replicate, and DeepMind's latest program can now teach itself better than the humans who created it could. DeepMind is the lab behind both versions of AlphaGo, with the latest evolution dubbed AlphaGo Zero -- which sounds like the prequel to a manga.



The Atlantic

It makes a certain kind of sense that the game's connoisseurs might have wondered if they'd seen glimpses of the occult in those three so-called ghost moves. Unlike something like tic-tac-toe, which is straightforward enough that the optimal strategy is always clear-cut, Go is so complex that new, unfamiliar strategies can feel astonishing, revolutionary, or even uncanny. Unfortunately for ghosts, now it's computers that are revealing these goosebump-inducing moves. As many will remember, AlphaGo--a program that used machine learning to master Go--decimated world champion Ke Jie earlier this year. Then, the program's creators at Google's DeepMind let the program continue to train by playing millions of games against itself.


Artificial Intelligence Learns to Learn Entirely on Its Own - Abstractions on Nautilus

#artificialintelligence

A mere 19 months after dethroning the world's top human Go player, the computer program AlphaGo has smashed an even more momentous barrier: It can now achieve unprecedented levels of mastery purely by teaching itself. Starting with zero knowledge of Go strategy and no training by humans, the new iteration of the program, called AlphaGo Zero, needed just three days to invent advanced strategies undiscovered by human players in the multi-millennia history of the game. By freeing artificial intelligence from a dependence on human knowledge, the breakthrough removes a primary limit on how smart machines can become. Earlier versions of AlphaGo were taught to play the game using two methods. In the first, called supervised learning, researchers fed the program 100,000 top amateur Go games and taught it to imitate what it saw.
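That first, supervised stage is essentially an imitation problem: given a board position, predict the move the human expert chose. The snippet below is a toy sketch of that idea with fabricated positions and a generic classifier standing in for the policy network; it is not the actual AlphaGo training code.

```python
# Toy sketch of the imitation step: fabricated 9x9 board positions paired with
# the move a human "expert" chose, and a generic scikit-learn classifier
# standing in for AlphaGo's policy network. None of this is the real pipeline
# or data; it only shows the supervised fit-to-expert-moves idea.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
positions = rng.integers(-1, 2, size=(500, 81))   # fake boards: -1, 0, 1 per point
expert_moves = rng.integers(0, 81, size=500)      # the point the "expert" played

policy = LogisticRegression(max_iter=500)
policy.fit(positions, expert_moves)               # learn to imitate what it saw

new_position = rng.integers(-1, 2, size=(1, 81))
print("imitated move:", policy.predict(new_position)[0])
```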


Google's artificial intelligence computer 'no longer constrained by limits of human knowledge'

FOX News

The computer that stunned humanity by beating the best mortal players at a strategy board game requiring "intuition" has become even smarter, its creators claim. Even more startling, the updated version of AlphaGo is entirely self-taught -- a major step towards the rise of machines that achieve superhuman abilities "with no human input", they reported in the science journal Nature. Dubbed AlphaGo Zero, the Artificial Intelligence (AI) system learnt by itself, within days, to master the ancient Chinese board game known as "Go" -- said to be the most complex two-person challenge ever invented. It came up with its own, novel moves to eclipse all the Go acumen humans have acquired over thousands of years. After just three days of self-training it was put to the ultimate test against AlphaGo, its forerunner which previously dethroned the top human champs.


Google's DeepMind AI unit releases new version of AlphaGo that learns on its own

@machinelearnbot

DeepMind, the artificial intelligence research organization owned by Google, announced some stunning results Wednesday from research into the next generation of its AlphaGo system: the machines are getting smarter. AlphaGo Zero, the new version of the AlphaGo system that defeated the world's best Go players in competitions over the past few years, was able to teach itself to play the ancient board game as well as its predecessors in a matter of days, with no input other than the basic rules of the game, DeepMind said in a blog post Wednesday. Previous versions of AlphaGo built to compete against human masters of the game required hours and hours of training on human Go gameplay, but AlphaGo Zero taught itself to play using a technique called reinforcement learning. Reinforcement learning involves training a system to figure out the actions that lead to the best reward, unlike supervised learning, in which the system is shown which outcomes are desired and trained repeatedly to recognize the factors that lead to those outcomes. DeepMind set up a neural network that played games of Go against itself until it learned how to formulate a winning strategy for a game in which capturing as many stones as possible can be satisfying in the early stages but can lead to big problems as the game plays out.
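The contrast drawn here can be made concrete with a minimal reinforcement-learning sketch. In the toy example below (a hypothetical environment, nothing to do with Go), the agent is never told which action is correct; it only observes rewards, and over many self-directed episodes it learns that a small immediate gain is worth passing up for a larger payoff later, which mirrors the tension the summary describes between grabbing stones early and winning the game.

```python
# Toy reinforcement-learning sketch (tabular Q-learning), unrelated to Go
# except for the point it illustrates: the agent only ever sees rewards, and
# it learns to pass up a small immediate gain (moving left) for a larger
# delayed payoff (reaching the right end). The environment is hypothetical.
import random

def step(state, action):
    """States 0..4 on a line. Left pays 0.1 now; reaching state 4 pays 10."""
    if action == 0:                                   # move left
        return max(state - 1, 0), 0.1, state - 1 <= 0
    nxt = state + 1                                   # move right
    return nxt, (10.0 if nxt == 4 else 0.0), nxt == 4

Q = [[0.0, 0.0] for _ in range(5)]                    # Q[state][action]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def greedy(s):
    return 0 if Q[s][0] >= Q[s][1] else 1

for episode in range(2000):
    state, done = 2, False
    for _ in range(50):                               # cap episode length
        action = random.randrange(2) if random.random() < epsilon else greedy(state)
        nxt, reward, done = step(state, action)
        target = reward if done else reward + gamma * max(Q[nxt])
        Q[state][action] += alpha * (target - Q[state][action])
        state = nxt
        if done:
            break

print("prefers the delayed reward at the start state:", greedy(2) == 1)
```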


Google's AlphaGo Zero taught itself to become the greatest Go player in history

Mashable

Google's DeepMind lab has built an artificially intelligent program that taught itself to become one of the world's most dominant Go players. Google says the program, AlphaGo Zero, endowed itself with "superhuman abilities," learning strategies previously unknown to humans. AlphaGo Zero started out with no clue how to win Go -- a 2,500-year-old Chinese game in which two players use black and white stones to capture more territory than their opponent. It took AlphaGo Zero just three days to beat an earlier AI program (AlphaGo Lee), which had resoundingly beaten world champion Lee Sedol in 2016. After 21 days of playing, AlphaGo Zero defeated AlphaGo Master, an intelligent program known for beating 60 top pros online and another world champion player in 2017.


'It's able to create knowledge itself': Google unveils AI that learns on its own

#artificialintelligence

Google's artificial intelligence group, DeepMind, has unveiled the latest incarnation of its Go-playing program, AlphaGo – an AI so powerful that it rediscovered thousands of years of human knowledge of the game before inventing better moves of its own, all in the space of three days. Named AlphaGo Zero, the AI program has been hailed as a major advance because it mastered the ancient Chinese board game from scratch, with no human help beyond being told the rules. In games against the 2015 version, which famously went on to beat the South Korean grandmaster Lee Sedol the following year, AlphaGo Zero won 100 to 0. The feat marks a milestone on the road to general-purpose AIs that can do more than thrash humans at board games. Because AlphaGo Zero learns on its own from a blank slate, its talents can now be turned to a host of real-world problems. At DeepMind, which is based in London, AlphaGo Zero is working out how proteins fold, a massive scientific challenge that could give drug discovery a sorely needed shot in the arm.


The latest AI can work things out without being taught

#artificialintelligence

In 2016, Lee Sedol, one of the world's best players of Go, lost a match in Seoul to a computer program called AlphaGo by four games to one. It was a big event, both in the history of Go and in the history of artificial intelligence (AI). Go occupies roughly the same place in the culture of China, Korea and Japan as chess does in the West. After its victory over Mr Lee, AlphaGo beat dozens of renowned human players in a series of anonymous games played online, before re-emerging in May to face Ke Jie, the game's best player, in Wuzhen, China. Mr Ke fared no better than Mr Lee, losing to the computer 3-0.