DeepMind


Here's Why Google's Assistant Sounds More Realistic Than Ever Before

#artificialintelligence

If you're playing around with Google's new Home Max or Mini smart speakers, or if you're just using an Android phone such as the new Pixel 2, you may be familiar with the Google Assistant virtual helper. And if you've done so in the last couple of days, you may have noticed that the virtual assistant's voice sounds more realistic than before. That's because Alphabet's Google has started using a cutting-edge piece of technology called WaveNet--developed by its DeepMind artificial intelligence division--in Google Assistant. WaveNet represents a different approach that uses recordings of real speech to train a neural network--a computer model that loosely simulates a brain.


Facebook heads to Canada in search of the next big AI advance

@machinelearnbot

Several leading figures in AI, including LeCun, have studied or taught at Canadian universities. Reinforcement learning builds on deep learning to let machines learn through experimentation. Michael Bowling, a U.S.-born computer scientist who leads a lab at the University of Alberta that has produced cutting-edge poker-playing machines, says the new Facebook lab simply shows that Canada already leads the rest of the world in AI. Indeed, after seeing AI researchers snapped up by big U.S. companies in recent years, Canada may well hope that the environment fostered by new labs, including the one in Montreal, will eventually produce companies that rival the likes of Facebook.


A Beginner's Guide to AI/ML – Machine Learning for Humans – Medium

@machinelearnbot

After a couple of AI winters and periods of false hope over the past four decades, rapid advances in data storage and computer processing power have dramatically changed the game in recent years. Artificial intelligence is the study of agents that perceive the world around them, form plans, and make decisions to achieve their goals. Meanwhile, we're continuing to make foundational advances towards human-level artificial general intelligence (AGI), also known as strong AI. The definition of an AGI is an artificial intelligence that can successfully perform any intellectual task that a human being can, including learning, planning and decision-making under uncertainty, communicating in natural language, making jokes, manipulating people, trading stocks, or… reprogramming itself.


From not working to neural networking

#artificialintelligence

Training a neural network involves adjusting the neurons' weights so that a given input produces the desired output. Another technique, unsupervised learning, involves training a network by exposing it to a huge number of examples, but without telling it what to look for. In a famous example, while working at Google in 2011, Mr Ng led a project called Google Brain in which a giant unsupervised learning system was asked to look for common patterns in thousands of unlabelled YouTube videos. In reinforcement learning, by contrast, training involves adjusting the network's weights to search for a strategy that consistently generates higher rewards.
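The description above--training means nudging weights until a given input produces the desired output--can be sketched with a single linear neuron and gradient descent. This is a toy illustration, not code from the article; the target function y = 2x is made up for the example:

```python
import numpy as np

# Toy sketch of supervised training: one linear neuron learns y = 2*x
# by repeatedly adjusting its weight to reduce the squared error.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x                              # desired outputs

w = 0.0                                  # the neuron's single weight
lr = 0.1                                 # learning rate
for _ in range(200):
    pred = w * x                         # forward pass
    grad = np.mean(2 * (pred - y) * x)   # gradient of mean squared error w.r.t. w
    w -= lr * grad                       # adjust the weight toward lower error

print(round(w, 3))                       # converges close to 2.0
```

Real networks have millions of weights and use backpropagation to compute all the gradients at once, but the update rule is the same idea repeated at scale.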


How video games help improve real-world AI

#artificialintelligence

Take, for example, Artur Filipowicz, an AI researcher at Princeton University who's been trying to develop software for autonomous vehicles. Now, DeepMind can beat just about any top score on any Atari video game. Privately funded organization OpenAI has taken the world of video game-based AI development to new levels, with a piece of software it calls Universe. The future of video games in AI development is rich with potential, and we're just starting to explore its full capabilities.


DeepMind Shows AI Has Trouble Seeing Homer Simpson's Actions

#artificialintelligence

Those findings from DeepMind, the pioneering London-based AI lab, also suggest why DeepMind has created a huge new dataset of YouTube clips to help train AI to identify human actions in videos that go well beyond "Mmm, doughnuts" or "D'oh!" To improve AI's ability to recognize human actions in motion, DeepMind has unveiled its Kinetics dataset, consisting of 300,000 video clips and 400 human action classes. Past cases have shown how imbalanced training datasets can lead to deep learning algorithms performing worse at recognizing the faces of certain ethnic groups. Even so, the Kinetics action classes featuring mostly male participants--such as "playing poker" or "hammer throw"--did not seem to bias the AI to the point where the deep learning algorithms had trouble recognizing female participants performing the same actions.


Human vs Machine: Five epic fights against AI

#artificialintelligence

After beating IBM's Deep Blue computer in a six-game chess match in 1996, Garry Kasparov played a rematch a year later that we called the "Slaughter on 7th Avenue". Catastrophe overtook the best chess mind of his era after Deep Blue played chess like no human. Prior to AlphaGo's match with Lee Sedol, Garry Kasparov told New Scientist that Go's clock was ticking, but the scale of the defeat nevertheless came as a shock, not least to Sedol. The next frontier for AI in games is StarCraft II, a space-war strategy game played in real time.


AlphaGo, in context – Andrej Karpathy – Medium

#artificialintelligence

AlphaGo is made up of a number of relatively standard techniques: behavior cloning (supervised learning on human demonstration data), reinforcement learning (REINFORCE), value functions, and Monte Carlo Tree Search (MCTS). In particular, AlphaGo uses a supervised learning (SL) policy to initialize a reinforcement learning (RL) policy that is then perfected with self-play; a value function is estimated from the RL policy, and both plug into MCTS, which (somewhat surprisingly) uses the weaker but more diverse SL policy to sample rollouts. That being said, AlphaGo does not by itself introduce any fundamental algorithmic breakthroughs in how we approach RL problems. While it remains an example of narrow AI, AlphaGo does symbolize Alphabet's AI power: the quantity and quality of the talent at the company, the computational resources at its disposal, and the all-in focus on AI from the very top.


AlphaGo's Designers Explore New AI After Winning Big in China

WIRED

After winning its three-game match against Chinese grandmaster Ke Jie, the world's top Go player, AlphaGo is retiring. Today, in Wuzhen, China, AlphaGo won its third game against Ke Jie, and, much as in the other two, the contest held little drama, even as the machine's peerless play sent the usual ripples across the worldwide Go community. During the press conference following the game, Demis Hassabis and DeepMind announced they will publicly release 50 games AlphaGo played against itself inside the vast data centers that underpin Google's online empire. After the match in China, DeepMind is disbanding the team that worked on the game, freeing top researchers like David Silver and Thore Graepel to spend their time on the rest of AI's future.