DeepMind's Losses and the Future of Artificial Intelligence

#artificialintelligence

Alphabet's DeepMind lost $572 million last year. DeepMind, likely the world's largest research-focused artificial intelligence operation, is losing a lot of money fast: more than $1 billion in the past three years. DeepMind also has more than $1 billion in debt due in the next 12 months. Does this mean that AI is falling apart? Gary Marcus is founder and CEO of Robust.AI and a professor of psychology and neural science at NYU.


This is how Google's DeepMind crushed puny humans at StarCraft

#artificialintelligence

DeepMind has ambitions to solve some of the world's most complex problems using artificial intelligence. But first, it needs to get really good at StarCraft. After months of training, the Alphabet-owned AI firm's AlphaStar program is now capable of playing a full game of StarCraft II against a professional human player – and winning. It might sound frivolous, but mastering a game as complex as StarCraft is a major technological leap for DeepMind's AI brains. The company showed off AlphaStar in a livestream where the five agents created by the program were initially pitted against professional player Dario "TLO" Wünsch in a pre-recorded five-game series.


DeepMind - Wikipedia

#artificialintelligence

DeepMind Technologies is a British artificial intelligence company founded in September 2010, currently owned by Alphabet Inc. The company is based in London, but has research centres in California, Canada[4], and France[5]. Acquired by Google in 2014, the company has created a neural network that learns how to play video games in a fashion similar to that of humans,[6] as well as a Neural Turing machine,[7] or a neural network that may be able to access an external memory like a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain.[8][9] The company made headlines in 2016 after its AlphaGo program beat a human professional Go player for the first time in October 2015[10] and again when AlphaGo beat Lee Sedol, the world champion, in a five-game match, which was the subject of a documentary film.[11] A more generic program, AlphaZero, beat the most powerful programs playing go, chess and shogi (Japanese chess) after a few hours of play against itself using reinforcement learning.[12]
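
The AlphaZero result mentioned above rests on self-play reinforcement learning: the program generates its own training data by playing games against itself and learning from the outcomes. As a rough illustration of that idea only (not AlphaZero's actual method, which pairs deep neural networks with Monte Carlo tree search), here is a minimal self-play learner for tic-tac-toe; the game, the tabular value update, and the hyperparameters are assumptions chosen purely to keep the sketch short and runnable.

```python
# Toy sketch only: NOT DeepMind's code or AlphaZero's algorithm. It just
# illustrates the idea of an agent improving purely by playing against
# itself and learning from the results. Game choice, update rule, and
# hyperparameters are all assumptions made to keep the example small.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)        # value estimate for each (state, move) pair
ALPHA, EPSILON = 0.5, 0.1     # learning rate and exploration rate

def choose_move(board, player):
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < EPSILON:                   # explore occasionally
        return random.choice(moves)
    state = "".join(board) + player
    return max(moves, key=lambda m: Q[(state, m)])  # otherwise exploit

def self_play_game():
    board, player, history = [" "] * 9, "X", []
    while True:
        state = "".join(board) + player
        move = choose_move(board, player)
        history.append((state, move, player))
        board[move] = player
        win = winner(board)
        if win or " " not in board:
            # Monte Carlo-style update from the final outcome: +1 for the
            # winner's moves, -1 for the loser's, 0 for every move in a draw.
            for s, m, p in history:
                reward = 0.0 if win is None else (1.0 if p == win else -1.0)
                Q[(s, m)] += ALPHA * (reward - Q[(s, m)])
            return
        player = "O" if player == "X" else "X"

for _ in range(20000):
    self_play_game()
print("state-action values learned:", len(Q))
```

AlphaZero's leap over a toy setup like this comes from replacing the lookup table with a deep network that generalises across board positions, and from using tree search guided by that network to pick moves during self-play.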


What AI needs to learn to master alien warfare

#artificialintelligence

To learn how humans and AI systems can best live together, we may need to kill a whole lot of Zerg. DeepMind, the AI-focused unit of Alphabet, and the games company Blizzard Entertainment are releasing a set of tools that will let programmers unleash all sorts of AI algorithms inside the space-themed game StarCraft. The game is more challenging than most of those tackled by AI programs to date. Not only is StarCraft extremely complex, it also requires planning far ahead and trying to second-guess what your opponent is up to. This means that developing AI programs capable of matching humans ought to help researchers explore new facets of humanlike intelligence with machines.
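
The tools described here were released as the StarCraft II Learning Environment, whose Python interface is the open-source pysc2 package. As a hedged sketch of what an agent loop over that interface looks like: the module paths, class names, and keyword arguments below follow pysc2's 2.x releases and may differ between versions, so treat this as an assumption rather than a verified recipe.

```python
# Assumption-laden sketch: interface details follow the open-source pysc2
# package (DeepMind's Python component of the StarCraft II Learning
# Environment) in its 2.x releases and may differ in other versions.
# Requires a local StarCraft II install plus the downloadable maps.
from pysc2.env import sc2_env
from pysc2.lib import actions, features

def run_noop_episode():
    """Play one episode on a small map, issuing only no-op actions."""
    with sc2_env.SC2Env(
            map_name="Simple64",                       # assumed installed map
            players=[sc2_env.Agent(sc2_env.Race.terran),
                     sc2_env.Bot(sc2_env.Race.random,
                                 sc2_env.Difficulty.very_easy)],
            agent_interface_format=features.AgentInterfaceFormat(
                feature_dimensions=features.Dimensions(screen=84, minimap=64)),
            step_mul=8) as env:                        # 8 game steps per action
        timestep = env.reset()[0]
        total_reward = 0
        while not timestep.last():
            # A real agent would inspect timestep.observation here and choose
            # among the currently available actions; we just do nothing.
            noop = actions.FunctionCall(actions.FUNCTIONS.no_op.id, [])
            timestep = env.step([noop])[0]
            total_reward += timestep.reward
        print("episode finished, cumulative reward:", total_reward)

if __name__ == "__main__":
    run_noop_episode()
```

That loop is the scaffolding the article alludes to: the environment hands back observations and rewards every few game steps, and the open research problem is replacing the do-nothing policy with an agent that can plan far ahead and anticipate an opponent.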