If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
AlphaGo wasn't the best Go player on the planet for very long. A new version of the masterful AI program has emerged, and it's a monster. In a head-to-head matchup, AlphaGo Zero defeated the original program by 100 games to none. What's really cool is how AlphaGo Zero did it. Whereas the original AlphaGo learned by ingesting data from hundreds of thousands of games played by human experts, AlphaGo Zero, also developed by the Alphabet subsidiary DeepMind, started with nothing but a blank board and the rules of the game.
What happens when a machine out-thinks the best human players? That question sits at the heart of the documentary AlphaGo, about an AI program designed to play the ancient Chinese board game Go. Professional players Fan Hui and Lee Sedol are forced to confront it as they're overwhelmed by AlphaGo's uncanny play style. If you don't remember how the matches went, I won't spoil the film for you, but suffice it to say that humanity does land at least one blow on the machines, through Lee's so-called "divine move" -- Go terminology for a play that is both unexpected and entirely original. But while this works in the context of Lee's battle with DeepMind's AI, it feels a little limited with regard to the wider challenges posed by artificial intelligence.
For example, when Google DeepMind's AlphaGo program defeated South Korean master Lee Sedol at the board game Go earlier this year, the terms AI, machine learning, and deep learning were all used in the media to describe how DeepMind won. One algorithmic approach from the early machine-learning crowd, artificial neural networks, came and mostly went over the decades before returning in the form of deep learning. Today, image recognition by machines trained via deep learning is in some scenarios better than that of humans, on tasks ranging from identifying cats to spotting indicators of cancer in blood and tumors in MRI scans. Deep learning has enabled many practical applications of machine learning and, by extension, of the overall field of AI.
Alphabet's DeepMind subsidiary has announced plans to expand its operations to Canada in order to accommodate the company's ever-growing research initiatives. "It was a big decision for us to open our first non-UK research lab," Hassabis said. "[W]e've had particularly strong links with the University of Alberta for many years: nearly a dozen of its outstanding graduates have joined us at DeepMind, and we've sponsored the machine learning lab to provide additional funding for PhDs over the past few years." Over the past year, DeepMind has consistently made headlines with its impressive AlphaGo AI, which has so far wrecked legendary Go players, learned how to improve itself without human input, and sworn to cure cancer.
AlphaGo is made up of a number of relatively standard techniques: behavior cloning (supervised learning on human demonstration data), reinforcement learning (REINFORCE), value functions, and Monte Carlo Tree Search (MCTS). In particular, AlphaGo uses a supervised learning (SL) policy to initialize a reinforcement learning (RL) policy, which is then perfected through self-play; a value function is estimated from the RL policy's games; and everything plugs into MCTS, which (somewhat surprisingly) uses the weaker but more diverse SL policy to sample rollouts. That said, AlphaGo does not by itself introduce any fundamental algorithmic breakthroughs in how we approach RL problems, and it is still an example of narrow AI. What AlphaGo does symbolize is Alphabet's AI power: the quantity and quality of the talent present in the company, the computational resources at its disposal, and the all-in focus on AI from the very top.
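The pipeline above bottoms out in MCTS guided by the policy network's priors. A minimal sketch of the selection step (a PUCT-style score; the node layout, field names, and c_puct value are assumptions for illustration, not DeepMind's code):

```python
import math

# Illustrative PUCT-style node selection, as used in AlphaGo-like MCTS.
# "prior" would come from the policy network; "value_sum" from backed-up
# evaluations of earlier simulations. All names here are sketch assumptions.

def puct_score(child, parent_visits, c_puct=1.0):
    """Mean value Q plus a prior-weighted exploration bonus U."""
    q = child["value_sum"] / child["visits"] if child["visits"] else 0.0
    u = c_puct * child["prior"] * math.sqrt(parent_visits) / (1 + child["visits"])
    return q + u

def select_child(children):
    """One step of tree descent: pick the child maximizing Q + U."""
    parent_visits = sum(c["visits"] for c in children)
    return max(children, key=lambda c: puct_score(c, parent_visits))

# Toy statistics for three candidate moves.
children = [
    {"move": "A", "prior": 0.6, "visits": 10, "value_sum": 4.0},
    {"move": "B", "prior": 0.3, "visits": 2, "value_sum": 1.5},
    {"move": "C", "prior": 0.1, "visits": 0, "value_sum": 0.0},
]
best = select_child(children)
```

Note how the bonus term keeps rarely visited moves with a decent prior in play, which is how a diverse policy (like AlphaGo's SL policy) broadens the search rather than narrowing it.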
It's a big accomplishment for Alphabet (NASDAQ:GOOGL) (NASDAQ:GOOG). To match the intuitive skills of human players, programmers taught AlphaGo pattern recognition. The latest version of AlphaGo that beat number-one-ranked Ke Jie is even more impressive than the one that defeated legendary player Lee Sedol last year.
Last year was huge for advancements in artificial intelligence and machine learning. Reinforcement learning has been around for decades, but combining it with large (or deep) neural networks provides the power needed to make it work on really complex problems (like the game of Go). Invented by Ian Goodfellow, now a research scientist at OpenAI, generative adversarial networks, or GANs, are systems consisting of one network that generates new data after learning from a training set, and another that tries to discriminate between real and fake data. The hope is that techniques that have produced spectacular progress in voice and image recognition, among other areas, may also help computers parse and generate language more effectively.
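To make the adversarial setup concrete, here is a minimal sketch of the two competing objectives (the standard minimax discriminator loss plus the non-saturating generator loss commonly used in practice; the function names are illustrative):

```python
import math

# Sketch of GAN objectives. d_real and d_fake are the discriminator's
# probability estimates that a real sample and a generated sample,
# respectively, are real.

def d_loss(d_real, d_fake):
    """Discriminator loss: push D(real) toward 1 and D(fake) toward 0."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def g_loss(d_fake):
    """Non-saturating generator loss: push D(fake) toward 1."""
    return -math.log(d_fake)

# A confident, correct discriminator incurs less loss than a guessing one;
# training alternates gradient steps on these two objectives.
confident = d_loss(0.9, 0.1)
guessing = d_loss(0.5, 0.5)
```

The generator improves exactly when it drives the discriminator back toward guessing, which is the adversarial dynamic described above.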
The artificial intelligence program, developed by Google-owned DeepMind, is retiring from competitive Go play after sweeping a three-match series against Chinese Go master Ke Jie at the Future of Go Summit in Wuzhen, China. The team behind AlphaGo plans to use the AI program to develop "advanced general algorithms" that can help scientists with issues like "finding new cures for diseases, dramatically reducing energy consumption, or inventing revolutionary new materials." AlphaGo's prowess at beating human Go masters has been held up as an example of how AI can, in some applications, surpass the smarts of mere mortals. The company also released a special set of AlphaGo vs. AlphaGo games -- the AI learned by playing millions of games against itself -- that contain "many new and interesting ideas and strategies" for Go players.
After winning its three-game match against Chinese grandmaster Ke Jie, the world's top Go player, AlphaGo is retiring. Today, in Wuzhen, China, AlphaGo won its third game against Ke Jie, and much as in the other two, the contest held little drama, even as the machine's peerless play sent the usual ripples across the worldwide Go community. During the press conference following the game, Hassabis and DeepMind announced they will publicly release 50 games AlphaGo played against itself inside the vast data centers that underpin Google's online empire. After the match in China, DeepMind is disbanding the team that worked on the game, freeing top researchers like Silver and Thore Graepel to spend their time working on the rest of AI's future.