Google AI Technology DeepMind Plays Soccer With An Ant: Sounds Dumb, But Here's Why It Is Not


Google DeepMind's artificial intelligence (AI) technology can play soccer with an ant, and the technology may eventually be applied to real products. DeepMind's AI is very capable: earlier this year, its AlphaGo system was applauded worldwide for defeating Lee Sedol, the strongest human Go player. Lee Sedol has won 18 world titles, but he lost 4-1 against the Google AI. The company says the match was watched by about 200 million people.

Blizzard and DeepMind turn StarCraft II into an AI research lab


StarCraft II has been a target for Alphabet's DeepMind AI research for a while now. The UK AI company took on Blizzard's sci-fi strategy game starting last year, announcing plans to create an open AI research environment based on the game so that others could contribute to the effort of creating a virtual agent that can beat the top human StarCraft players in the world. Now, DeepMind and Blizzard are opening the doors to that environment, with new tools including a machine learning API, a large game replay dataset, an open-source DeepMind toolset and more. The new release of the StarCraft II API on the Blizzard side includes a Linux package made to run in the cloud, as well as support for Windows and Mac. It also supports offline AI vs. AI matches, and the anonymized replays from actual human players for training agents start at 65,000 complete matches and will grow to more than 500,000 over the next few weeks. StarCraft II is a useful environment for AI research because its games are so complex and varied, with multiple open routes to victory in each individual match.

OpenAI thrashes DeepMind using an AI from the 1980s


Artificial intelligence (AI) researchers have a long history of going back in time to explore old ideas, and now researchers at OpenAI, which is backed by Elon Musk, have revisited "neuroevolution," a field that has been around since the 1980s, and achieved state-of-the-art results. The group, led by OpenAI's research director Ilya Sutskever, explored a family of algorithms called "evolution strategies," which are aimed at solving optimisation problems. Optimisation problems are just what they sound like: take something that needs optimising, such as your route to work, a flight plan, or even a healthcare treatment, and optimise it. On an abstract level, the technique works by letting successful algorithms pass their characteristics on to future generations; in short, each successive generation gets better at whatever task it has been assigned. Coming back to the present day, the researchers reworked these algorithms so they'd work better with today's deep neural networks and run better on large-scale distributed computing systems.
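The generational idea described above can be sketched concretely. The following is a minimal, self-contained illustration of an evolution strategy, not OpenAI's actual implementation: the toy fitness function, target vector, population size, and step sizes are all assumptions chosen for the example. Each generation, a population of randomly perturbed "offspring" of the current parameters is evaluated, and the parent is nudged toward the perturbations that scored best, so successful traits carry over to the next generation.

```python
import numpy as np

# Toy optimisation problem: find a hidden target vector.
# (Illustrative assumption, not a task from the OpenAI work.)
TARGET = np.array([0.5, 0.1, -0.3])

def fitness(w):
    # Higher is better; the maximum (0.0) is reached at the target.
    return -float(np.sum((w - TARGET) ** 2))

def evolution_strategy(steps=300, pop=50, sigma=0.1, lr=0.03, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(3)  # the current "parent" parameters
    for _ in range(steps):
        # One Gaussian perturbation ("offspring") per population member.
        noise = rng.standard_normal((pop, w.size))
        rewards = np.array([fitness(w + sigma * eps) for eps in noise])
        # Subtract the mean so only relative success matters.
        advantages = rewards - rewards.mean()
        # Move the parent toward the perturbations that scored well:
        # successful offspring pass their traits to the next generation.
        w = w + (lr / (pop * sigma)) * noise.T @ advantages
    return w

final = evolution_strategy()
print(fitness(final))  # approaches the optimum of 0.0
```

Note that this update needs only fitness evaluations, never gradients of the fitness function itself, which is why evolution strategies parallelise so well across large distributed systems: each worker evaluates a perturbation independently.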



This repository contains the trained model and dataset used for Unsupervised Adversarial Training (UAT) from the paper Are Labels Required for Improving Adversarial Robustness? Our model is available via TF-Hub. The preferred way to run the evaluation is through the accompanying script, which will set up a virtual environment, install the dependencies, and run the evaluation, printing the adversarial accuracy of the model. Note that this file is very large and requires 227 GB of disk space. This is not an official Google product.

DeepMind funds new post at Oxford University – the DeepMind Professorship of Artificial Intelligence

Oxford Comp Sci

Demis Hassabis, co-founder and CEO, DeepMind, says: 'I'm delighted to expand our support of AI research at Oxford with the DeepMind Professorship of Artificial Intelligence. I look forward to seeing who the University appoints and where they decide to focus their research with the support of Oxford's world-class AI research community.'