In 2016, Alphabet's DeepMind came out with AlphaGo, an AI that consistently beat the best human Go players. One year later, the subsidiary refined its work, creating AlphaGo Zero. Where its predecessor learned to play Go by observing amateur and professional matches, AlphaGo Zero mastered the ancient game simply by playing against itself. DeepMind then created AlphaZero, which could play Go, chess and shogi with a single algorithm. What tied all those AIs together is that they knew the rules of the games they had to master going into their training.
David Silver is responsible for several eye-catching demonstrations of artificial intelligence in recent years, working on advances that helped revive interest in the field after the last great AI winter. At DeepMind, a subsidiary of Alphabet, Silver has led the development of techniques that let computers learn for themselves how to solve problems that once seemed intractable. Most famously, this includes AlphaGo, a program revealed in 2016 that taught itself to play the ancient board game Go to a world-class level. Go is too subtle and instinctive to be tamed using conventional programming, but AlphaGo learned to play through practice and positive reward -- an AI technique known as "reinforcement learning." In 2018, Silver and colleagues described a more general version of the program, called AlphaZero, capable of learning to play chess and shogi, as well as Go, at an expert level.
What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. Artificial intelligence has recently beaten world champions in Go and poker and made extraordinary progress in domains such as machine translation, object classification, and speech recognition. However, most AI systems are extremely narrowly focused. AlphaGo, the champion Go player, does not know that the game is played by putting stones onto a board; it has no idea what a "stone" or a "board" is, and would need to be retrained from scratch if you presented it with a rectangular board rather than a square grid.
Starfleet's star android, Lt. Commander Data, has been enlisted by his renegade android "brother" Lore to join a rebellion against humankind -- much to the consternation of Jean-Luc Picard, captain of the USS Enterprise. "The reign of biological life-forms is coming to an end," Lore tells Picard. "You, Picard, and those like you, are obsolete." In real life, the era of smart machines has already arrived. They haven't completely taken over the world yet, but they're off to a good start. "Machine learning" -- a sort of concrete subfield within the more nebulous quest for artificial intelligence -- has invaded numerous fields of human endeavor, from medical diagnosis to searching for new subatomic particles.
The UK has been at the cutting edge of artificial intelligence (AI) innovation, from Alan Turing, the pioneering mathematician and computer visionary, who launched the field, to DeepMind's AlphaGo, the first computer program to defeat a professional Go player in 2015. Several pioneering AI companies were founded in the UK, including DeepMind, SwiftKey and Magic Pony, all of which were acquired by US companies – Google, Microsoft and Twitter – for $500 million, $250 million and $150 million, respectively. Over the last few years, the UK government has launched its Office for AI and Centre for Data Ethics and Innovation. But is the UK still an AI leader? In 2019, McKinsey Global Institute placed the UK in the top quartile for "AI readiness".
You may have heard of DeepMind in the past, and if you haven't, you soon will. DeepMind has racked up a number of achievements since it was founded, but it is most notable for AlphaGo, an AI program that beat some of the best professional Go players in history, including Ke Jie. DeepMind's AlphaFold 2 can now predict a protein's three-dimensional structure from its amino-acid sequence to within the width of an atom. To give some context, AlphaFold 2 competed with over 100 research groups worldwide in a competition known as the Critical Assessment of Protein Structure Prediction, or CASP. The goal was exactly what AlphaFold 2 achieved: predicting a protein's structure from its amino-acid sequence.
With the success of DeepMind's AlphaGo system in defeating the world Go champion, reinforcement learning has attracted significant attention among researchers and developers. Deep reinforcement learning has become one of the most significant techniques in AI and is also being used by researchers pursuing artificial general intelligence. Below is a list, in no particular order, of 10 of the best free resources for learning deep reinforcement learning using TensorFlow. About: The tutorial "Introduction to RL and Deep Q Networks" is provided by the developers at TensorFlow. Topics include an introduction to deep reinforcement learning, the CartPole environment, the DQN agent, Q-learning, deep Q-learning, DQN on CartPole in TF-Agents, and more.
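The tutorials above center on Q-learning, the value-update rule that deep Q-networks scale up with a neural network. As a minimal sketch of that rule (a toy tabular version in plain Python, not the TF-Agents DQN from the tutorial; the corridor environment and all constants here are illustrative assumptions):

```python
# Tabular Q-learning on a tiny 1-D corridor: states 0..4, actions
# 0 = left, 1 = right; reaching state 4 (the goal) yields reward 1.
# Illustrative toy only -- a DQN replaces the table with a network.
import random

N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Move left/right along the corridor; reward 1 only at the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=2000, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            if random.random() < EPSILON:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            # Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a').
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

if __name__ == "__main__":
    q = train()
    # After training, "right" should dominate in every non-goal state.
    print([max((0, 1), key=lambda a: q[s][a]) for s in range(GOAL)])
```

The same update drives DQN; the difference is that the Q-table becomes a neural network trained on minibatches from a replay buffer.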
But then there was the Chinese game of Go, estimated to be 4,000 years old, which offers far more "degrees of freedom" (possible moves, strategies, and rules) than chess, with on the order of 10^170 possible board positions. As futurist George Gilder tells us in Gaming AI, it was a rite of passage for aspiring intellects in Asia: "Go began as a rigorous rite of passage for Chinese gentlemen and diplomats, testing their intellectual skills and strategic prowess. Later, crossing the Sea of Japan, Go enthralled the Shogunate, which brought it into the Japanese Imperial Court and made it a national cult." Then AlphaGo, from Google's DeepMind, appeared on the scene in 2016. As the Chinese American titan Kai-Fu Lee explains in his bestseller AI Superpowers, the riveting encounter between man and machine across the Go board had a powerful effect on Asian youth. Though mostly unnoticed in the United States, AlphaGo's 2016 defeat of Lee Sedol was avidly watched by 280 million Chinese, and Sedol's loss was a shattering experience. The Chinese saw DeepMind as an alien system defeating an Asian man in the epitome of an Asian game.
Playing human games such as chess and Go has long been considered a major benchmark of human capabilities. Computer programs have become strong chess players and, since the late 1990s, have been able to beat even the best human chess champions; for a long time, though, computers were unable to beat expert Go players -- the game of Go proved especially difficult for computers. However, in 2016, a new program called AlphaGo finally won a victory over a human Go champion, only to be beaten by its subsequent versions (AlphaGo Zero and AlphaZero). AlphaZero proceeded to beat the best computers and humans in chess, shogi and Go, including all its predecessors from the Alpha family. Core to AlphaZero's success is its use of a deep neural network, trained through reinforcement learning, as a powerful heuristic to guide a tree search algorithm (specifically Monte Carlo Tree Search). The recent successes of machine learning are good reason to consider the limitations of learning algorithms and, in a broader sense, the limitations of AI. In the context of a particular competition (or 'game'), a natural question to ask is whether an absolute winner AI might exist -- one that, given sufficient resources, will always achieve the best possible outcome.
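The "network as a search heuristic" idea above can be sketched with the PUCT selection rule that AlphaZero-style MCTS uses to combine a learned prior P(a) with running value estimates Q(a). This is a deliberately stripped-down illustration: the "network" is a hand-coded stub, the game is a single move with fixed hidden payoffs, and all names and constants are assumptions, not AlphaZero's actual implementation.

```python
# PUCT-style selection: score(a) = Q(a) + c * P(a) * sqrt(N_total) / (1 + N(a)).
# TRUE_VALUES stands in for the environment; PRIORS stands in for the policy
# head of a trained network. Both are illustrative stubs.
import math
import random

TRUE_VALUES = {"a": 0.2, "b": 0.8, "c": 0.5}   # hidden payoff of each move
PRIORS      = {"a": 0.4, "b": 0.3, "c": 0.3}   # stand-in network policy output
C_PUCT = 1.5                                   # exploration constant

def puct_select(stats, total_visits):
    """Pick the action maximizing Q + c * P * sqrt(N_total) / (1 + N_a)."""
    def score(a):
        n, w = stats[a]
        q = w / n if n else 0.0
        return q + C_PUCT * PRIORS[a] * math.sqrt(total_visits) / (1 + n)
    return max(stats, key=score)

def run_mcts(n_sims=2000, seed=0):
    random.seed(seed)
    stats = {a: [0, 0.0] for a in TRUE_VALUES}  # action -> [visits, total value]
    for t in range(n_sims):
        a = puct_select(stats, t + 1)
        # Noisy sample of the move's true value -- stands in for the network's
        # value head evaluating the resulting position.
        v = TRUE_VALUES[a] + random.uniform(-0.1, 0.1)
        stats[a][0] += 1
        stats[a][1] += v
    return stats

if __name__ == "__main__":
    stats = run_mcts()
    # AlphaZero plays the most-visited move at the root.
    print(max(stats, key=lambda a: stats[a][0]))
```

The key design point survives the simplification: the prior spreads early visits across plausible moves, the value estimates take over as visit counts grow, and the search budget concentrates on the best move ("b" here).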
Paid advertising will change significantly in the future. Will people, or digital advertising agencies like us, still have a role to play? Will your AlphaGo Zero have its own advertising platforms in the future? The benefits and potential of artificial intelligence have been talked about for many years, but only in the last year has it reached a level where it can be harnessed, like the Kalevala's mare, to make riches. That opportunity to make a difference is crucial in taking the development of artificial intelligence forward, and Google, for example, is consciously investing in its development work, as it facilitates e.g.