Collaborating Authors

Google's artificial intelligence team DeepMind sets its sights on StarCraft 2


At BlizzCon on Friday, Google research scientist Oriol Vinyals announced that StarCraft 2 was being opened up to artificial intelligence researchers around the world. The goal: to create better AI opponents for StarCraft 2, and possibly even AI coaches that could teach humans how to play the strategy game better. Of course, the goal of AI development is not just to match human players but to best them, and just as DeepMind's AI program AlphaGo beat human champion Go player Lee Se-dol earlier this year, DeepMind wants its AI to someday take on a human StarCraft 2 champion. More details about DeepMind's interest in StarCraft 2 will be revealed later during the convention, and we'll update you if we hear anything else.

Dip into DeepMind's 3D playground where AI comes to learn


DeepMind is inviting you down the rabbit hole and into its 3D digital world. Its hope is to encourage artificial intelligence to grow and thrive in an environment that's nearly as rich as the one human intelligence evolved in. DeepMind is an AI moonshot project under the Alphabet umbrella, formed from a London-based startup that Google bought in 2014. Remember when AI beat a human champ at the complex strategy game of Go earlier this year? The company is now giving outside software developers access to DeepMind Lab, a virtual space where AI agents learn to navigate the world, according to a blog post published Saturday.

Google DeepMind AI tries its hand at creating Hearthstone and Magic: The Gathering cards - TechRepublic


Tens of millions of people worldwide play Hearthstone, an online collectible card game set in the Warcraft universe, which also encompasses the massively popular MMO World of Warcraft and a major movie. Now Google DeepMind, fresh from creating an AI that triumphed at a game it was thought no computer could master, has been using Hearthstone to test ways a machine learning system could generate natural language - such as English - and formal language - such as computer code. Researchers tasked a system with writing the code that sets the behaviour of cards used in Hearthstone and in another famous collectible card game, Magic: The Gathering (MTG). The DeepMind system -- which implemented a novel neural network architecture -- was first trained using code from open-source versions of Hearthstone, programmed in Python, and Magic: The Gathering, programmed in Java.

Cambridge takes global AI lead as Google DeepMind backs Machine Learning chair


Cambridge University is launching a DeepMind Chair of Machine Learning, thanks to a benefaction from the world-leading British AI company – Google's DeepMind – whose IP was born within the globally acclaimed seat of learning. The new chair, to be based at Cambridge's Department of Computer Science and Technology, will build on the university's strengths in computer science and engineering and will be a focal point for the wide range of AI-related research taking place across the university. Cambridge researchers are designing systems that are cybersecure, model human reasoning, interact with us in affective ways, uniquely identify us by our face and give insights into our biological makeup. The first holder of the DeepMind chair is expected to take up the position in October 2019, following an international search by the department. The chair will have full academic freedom to pursue research in the field of machine learning.

Google's DeepMind Masters Atari Games

AITopics Original Links

A computer that taught itself to play almost 50 video games including Space Invaders and Pong is being hailed as the pinnacle of artificial intelligence. But it is unlikely to spark the Terminator-like Armageddon predicted in recent months by technology entrepreneur Elon Musk (who provided early funding for the project) and physicist Stephen Hawking. Despite mastering more than half the classic Atari 2600 games, the program – deep Q-network (DQN), developed by DeepMind Technologies – struggled with more difficult challenges, such as, well, Pac-Man. "On the face of it, it looks trivial in the sense that these are games from the '80s and you can write solutions to them quite easily," said Dr Demis Hassabis, the vice-president of engineering at DeepMind, a British company acquired by Google a year ago for a reported £400m (US$650m). Never before has a computer taught itself how to do a range of complex operations, said Dr Hassabis, one of the company's co-founders.