How TensorFlow makes Candy Crush virtual players
Simulating a human gamer has enabled Candy Crush developer King to speed up its release cycles.

The evolution of DeepMind's AlphaGo deep learning algorithm was the inspiration behind mobile games developer King's work to build a simulation of a games player using Google's TensorFlow.

AlphaGo beat Go world champion Lee Sedol in 2016. To master the ancient game of Go, AlphaGo needed to play the game over and over again, applying a technique called Monte Carlo tree search, which uses a deep neural network to "learn" the best move to make.

At the time, artificial intelligence (AI) researcher Demis Hassabis, co-founder of DeepMind, which Google acquired in 2014, described how open source libraries for numerical computation using data flow graphs, such as TensorFlow, allow researchers to efficiently deploy the computation needed for deep learning algorithms across multiple CPUs or GPUs. According to GitHub's Octoverse 2018 report, TensorFlow was by far the most popular open source project in 2018.
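To make the idea concrete, the core loop of Monte Carlo tree search can be sketched on a toy game. This is only an illustrative sketch, not King's or DeepMind's code: the game (Nim, where players alternately take one to three stones and whoever takes the last stone wins), the `rollout` evaluation, and all function names here are assumptions for the example. In AlphaGo, the leaf evaluation came from deep neural networks; a random playout stands in for that network below.

```python
import math
import random

def moves(stones):
    # Legal moves in Nim: take 1, 2 or 3 stones (never more than remain).
    return [m for m in (1, 2, 3) if m <= stones]

def rollout(stones):
    # Random playout, standing in for AlphaGo's neural-network evaluation.
    # Returns 1 if the player to move from this position wins, else 0.
    player = 0
    while stones > 0:
        stones -= random.choice(moves(stones))
        player ^= 1
    return 1 if player == 1 else 0

class Node:
    def __init__(self, stones):
        self.stones = stones
        self.visits = 0
        self.wins = 0.0       # wins for the player who moved INTO this node
        self.children = {}    # move -> Node

def mcts(root_stones, iterations=2000, c=1.4):
    root = Node(root_stones)
    for _ in range(iterations):
        node, path = root, [root]
        # Selection: descend through fully expanded nodes using UCB1.
        while node.children and len(node.children) == len(moves(node.stones)):
            _, node = max(
                node.children.items(),
                key=lambda kv: kv[1].wins / kv[1].visits
                + c * math.sqrt(math.log(node.visits) / kv[1].visits),
            )
            path.append(node)
        # Expansion: add one untried child, if the position is not terminal.
        untried = [m for m in moves(node.stones) if m not in node.children]
        if untried:
            m = random.choice(untried)
            node.children[m] = Node(node.stones - m)
            node = node.children[m]
            path.append(node)
        # Evaluation: playout result from the leaf mover's perspective.
        result = rollout(node.stones) if node.stones > 0 else 0
        # Backpropagation: alternate the winner's perspective up the path.
        for depth, n in enumerate(reversed(path)):
            n.visits += 1
            n.wins += (1 - result) if depth % 2 == 0 else result
    # Play the most-visited move from the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

The loop is the same select/expand/evaluate/backpropagate cycle AlphaGo ran at scale; the difference is that AlphaGo's evaluation and move priors came from TensorFlow-trained deep networks rather than random playouts. From a pile of five stones, for example, the search converges on taking one stone, leaving the opponent a losing pile of four.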
Feb-3-2019, 07:38:31 GMT