
The Atlantic - Technology

It makes a certain kind of sense that the game's connoisseurs might have wondered if they'd seen glimpses of the occult in those three so-called ghost moves. Unlike tic-tac-toe, which is straightforward enough that the optimal strategy is always clear-cut, Go is so complex that new, unfamiliar strategies can feel astonishing, revolutionary, or even uncanny.


New AlphaGo AI learns without help from humans

#artificialintelligence

What's new: AlphaGo's initial iteration was trained on a database of human Go games, whereas the newer AlphaGo Zero's neural networks take only the current state of the board as input. Through trial and error, with feedback in the form of wins and losses, the AI taught itself how to play.
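DeepMind's actual system pairs deep networks with Monte Carlo tree search, but the trial-and-error loop the excerpt describes — play, observe the win/loss signal, update, repeat — can be sketched on a toy game. Below is a minimal tabular self-play learner for the subtraction game Nim (players alternate taking 1–3 stones; whoever takes the last stone wins). All function names and parameters here are illustrative, not anything from DeepMind's code.

```python
import random

TAKE = (1, 2, 3)  # legal moves: remove 1, 2, or 3 stones

def train(n_stones=12, episodes=30000, eps=0.2, alpha=0.1, seed=0):
    """Tabular self-play: both sides share one action-value table,
    updated from the win/loss outcome of each finished game."""
    rng = random.Random(seed)
    Q = {}  # (stones_remaining, stones_taken) -> estimated win prob for the mover

    def pick(s):
        moves = [a for a in TAKE if a <= s]
        if rng.random() < eps:          # explore occasionally
            return rng.choice(moves)
        return max(moves, key=lambda a: Q.get((s, a), 0.5))

    for _ in range(episodes):
        s, history, player = n_stones, [], 0
        while s > 0:                    # one self-play game
            a = pick(s)
            history.append((s, a, player))
            s -= a
            player ^= 1
        winner = player ^ 1             # whoever took the last stone
        for (st, a, p) in history:      # feedback: 1 for the winner's moves, 0 otherwise
            r = 1.0 if p == winner else 0.0
            old = Q.get((st, a), 0.5)
            Q[(st, a)] = old + alpha * (r - old)
    return Q

def best_move(Q, s):
    """Greedy move from the learned table."""
    return max((a for a in TAKE if a <= s), key=lambda a: Q.get((s, a), 0.5))
```

Given only the rules and the win/loss signal, the table converges toward the game's known strategy: leave your opponent a multiple of four stones (e.g., from 5 stones, take 1).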


Google's AlphaGo beats world's best player in latest Go match

New Scientist

DeepMind's artificial intelligence AlphaGo has defeated Ke Jie, the world's number one player, in the first of three games of Go played in Wuzhen, China. The AI won by half a point – the smallest possible margin of victory – in a match lasting 4 hours and 15 minutes. Although the scoreline looks close, AlphaGo was in the lead from relatively early on in the game. Because the AI favours moves that are more likely to guarantee victory, it doesn't usually trounce opponents. In a press conference after the match, Ke said AlphaGo had learned from its recent victories against Go champions.


AI to help, not confront humans, says AlphaGo developer Aja Huang

#artificialintelligence

AI (artificial intelligence) will not confront human beings but will serve as a tool at their disposal, as the human brain will remain the most powerful, although some say AI machines may be able to talk with people and judge their emotions by 2045 at the earliest, according to Aja Huang, one of the key developers behind AlphaGo, the AI program developed by Google's DeepMind unit. Huang made the comments in a speech at the 2017 Taiwan AI Conference, hosted recently by the Institute of Information Science under Academia Sinica and the Taiwan Data Science Foundation. Huang recalled that he was invited to join London-based DeepMind Technologies in late 2012, two years after he won the gold medal at the 15th Computer Olympiad in Kanazawa in 2010. In February 2014, DeepMind was acquired by Google, giving the AI team access to ample advanced hardware such as powerful TPUs (tensor processing units) and enabling them to build AlphaGo, the world's most powerful Go-playing program, which has stunned the world by beating the top Go players. In March 2016, AlphaGo beat Lee Sedol, a South Korean professional Go player, in a five-game match, marking the first time a computer Go program had beaten a 9-dan professional without handicaps.


2017 AI/ML Surprises - DZone AI

#artificialintelligence

No surprises here, but the award for the biggest event (not so much a surprise) goes to Google's AlphaGo, which stepped up to the plate and taught itself to master the game of Go, having been given nothing more than the basic rules. Until now, machines have needed people to teach them, to feed them data, and to help them learn in a supervised way until they're ready to take things to the next level through consumption of massive datasets. In contrast, AlphaGo Zero was able to gain mastery by playing against itself, then updating itself based on what it had learned from each game. Repeat this millions and millions of times, and the result was a machine that could beat the previous AlphaGo 90% of the time -- an impressive feat given that AlphaGo was able to beat the 18-time world champion 100-nil. Well, I'm not the paranoid type, so I don't think Skynet just lit up, but at the same time I believe we are fast approaching the singularity and a massive change in the pace of technological advancement and its societal impacts.