Unity and Google Cloud Platform launch challenge to push limits of game AI

#artificialintelligence

Unity Technologies has teamed up with Google Cloud Platform to create the Obstacle Tower Challenge, which will test the limits of artificial intelligence in games. In the first-of-its-kind contest, Google will offer prizes of cash, travel vouchers, and Google Cloud Platform credits valued at more than $100,000. Unity, the maker of the Unity game engine, is creating the contest to test the capabilities of intelligent agents and to accelerate AI research and development. (Unity recently got into a spat with Improbable over a licensing dispute.) The Obstacle Tower Challenge will be a new benchmark aimed at testing the vision, control, planning, and generalization abilities of AI systems, capabilities that no other benchmark has tested together before.


Unity developed a video game designed to test AI players

#artificialintelligence

Unity, a leading maker of game development tools, announced today that it has created an unprecedented type of video game: one designed to be played not by humans, but by artificial intelligence. The game is called Obstacle Tower, and it is software built to judge the sophistication of an AI agent by measuring how efficiently it can climb up to 100 floors that change and scale in difficulty in unpredictable ways. Each level is procedurally generated, so it changes every time the AI attempts it. With Obstacle Tower, and a $100,000 pool of prizes set aside for contest participants to claim, Unity hopes to give AI researchers a new benchmarking tool for evaluating self-learning software. "We wanted to give the researchers something to really work with that would to an extreme degree challenge the abilities of the AI systems that are currently in research and development around the world," Danny Lange, Unity's vice president of AI and machine learning, told The Verge.
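As a rough sketch of what "playing" Obstacle Tower as an AI looks like in practice: the environment is exposed to Python through a Gym-style wrapper (the obstacle-tower-env package that accompanies the release). The binary path below is a placeholder, and the keyword arguments reflect that package's documented interface; treat the details as assumptions if your version differs.

```python
# Minimal random-agent episode against Obstacle Tower's Gym-style wrapper.
# Assumes the obstacle-tower-env package and a downloaded environment binary;
# the path below is a placeholder.
from obstacle_tower_env import ObstacleTowerEnv

env = ObstacleTowerEnv("./ObstacleTower/obstacletower",  # placeholder binary path
                       retro=True,            # small pixel observations, ALE-style
                       realtime_mode=False)   # run faster than real time for training

obs = env.reset()            # each reset procedurally generates a fresh tower
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()        # random policy standing in for an agent
    obs, reward, done, info = env.step(action)
    total_reward += reward                    # reward is sparse: mainly for clearing floors
print(f"episode return: {total_reward:.1f}")
env.close()
```

A trained agent would simply replace the `env.action_space.sample()` call with its policy's action; everything else in the loop stays the same.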


Unity Obstacle Tower Challenge names winners after 6-month AI contest

#artificialintelligence

After six months of competition, Unity Technologies has named the top winners of the Obstacle Tower Challenge, a contest dedicated to advancing artificial intelligence for games. The winners include Alex Nichol and the Compscience.org team. Unity, the maker of the Unity3D game engine, also announced that it has open-sourced Obstacle Tower for the research community to extend for its own needs. The challenge started in February as a way to foster research in the AI community by providing a challenging new benchmark built in Unity. The benchmark, called Obstacle Tower, was developed to be difficult for current machine learning algorithms to solve.


Announcing the Obstacle Tower Challenge winners and open source release – Unity Blog

#artificialintelligence

After six months of competition (and a few last-minute submissions), we are happy to announce the conclusion and winners of the Obstacle Tower Challenge. We want to thank all of the participants in both rounds and congratulate the winners, including Alex Nichol and the Compscience.org team. We are also excited to share that we have open-sourced Obstacle Tower for the research community to extend for its own needs. We started this challenge in February as a way to help foster research in the AI community by providing a challenging new benchmark of agent performance built in Unity, which we called Obstacle Tower. Obstacle Tower was developed to be difficult for current machine learning algorithms to solve and to push the boundaries of what is possible in the field by focusing on procedural generation.
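Because every reset draws a new procedurally generated tower, comparing submissions fairly depends on controlling the generator seed. A hedged sketch of how the open-sourced environment exposes this, following the obstacle-tower-env interface; the seed values below are placeholders, not the hidden seeds used to score the challenge, and the `current_floor` info key is an assumption about the wrapper's info dict.

```python
# Score a (random) agent by the mean floor reached on fixed held-out seeds.
from obstacle_tower_env import ObstacleTowerEnv

env = ObstacleTowerEnv("./ObstacleTower/obstacletower", retro=True, realtime_mode=False)

HELD_OUT_SEEDS = [1001, 1002, 1003, 1004, 1005]  # placeholder evaluation seeds

floors = []
for seed in HELD_OUT_SEEDS:
    env.seed(seed)                # pins the tower layout used by the next reset
    obs = env.reset()
    done, info = False, {}
    while not done:
        obs, reward, done, info = env.step(env.action_space.sample())
    floors.append(info.get("current_floor", 0))  # assumed info key

print("mean floor reached:", sum(floors) / len(floors))
env.close()
```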


Obstacle Tower: A Generalization Challenge in Vision, Control, and Planning

arXiv.org Artificial Intelligence

The rapid pace of research in Deep Reinforcement Learning has been driven by the availability of fast and challenging simulation environments. These environments often take the form of games, with tasks ranging from simple board games to classic home console games to modern strategy games. We propose a new benchmark called Obstacle Tower: a high-visual-fidelity, 3D, third-person, procedurally generated game environment. An agent in Obstacle Tower must learn to solve both low-level control and high-level planning problems in tandem while learning from pixels and a sparse reward signal. Unlike similar benchmarks such as the ALE, evaluation of agent performance in Obstacle Tower is based on an agent's ability to perform well on unseen instances of the environment. In this paper we outline the environment and provide a set of initial baseline results produced by current state-of-the-art Deep RL methods as well as by human players. In all cases these algorithms fail to produce agents capable of performing anywhere near human level on a set of evaluations designed to test both memorization and generalization ability. As such, we believe that Obstacle Tower has the potential to serve as a helpful Deep RL benchmark now and into the future.
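The evaluation criterion described in the abstract is what sets Obstacle Tower apart: the score that matters is performance on tower seeds the agent never trained on, which separates memorization from generalization. A minimal sketch of that protocol under assumed helper names (`make_env`, `train_agent`, and `evaluate` are hypothetical callables standing in for a full Deep RL pipeline, and the seed pools are illustrative, not the paper's exact values):

```python
# Sketch of a weak-generalization protocol: train on one seed pool,
# report the score on a disjoint held-out pool.
import random
from statistics import mean

TRAIN_SEEDS = list(range(100))        # seeds available during training (assumption)
EVAL_SEEDS = list(range(1001, 1006))  # held-out seeds, never seen in training

def run_protocol(make_env, train_agent, evaluate):
    # Train with towers sampled only from the training seed pool.
    agent = train_agent(lambda: make_env(random.choice(TRAIN_SEEDS)))
    # Memorization check: performance on seeds the agent actually trained on.
    train_score = mean(evaluate(agent, make_env(s)) for s in TRAIN_SEEDS[:5])
    # Generalization check: performance on unseen seeds; this is the benchmark score.
    eval_score = mean(evaluate(agent, make_env(s)) for s in EVAL_SEEDS)
    return train_score, eval_score
```

A large gap between the two scores indicates an agent that has memorized its training towers rather than learned a transferable policy, which is exactly the failure mode the benchmark is designed to expose.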