Real-time Learning in the NERO Video Game

AAAI Conferences

If game characters could learn through interacting with the player, behavior could improve as the game is played, keeping it interesting. The real-time NeuroEvolution of Augmenting Topologies (rtNEAT) method, which can evolve increasingly complex artificial neural networks in real time as a game is being played, will be presented. The rtNEAT method makes possible an entirely new genre of video games in which the player trains a team of agents through a series of customized exercises. In order to demonstrate this concept, the NeuroEvolving Robotic Operatives (NERO) game was built based on rtNEAT. In NERO, the player trains a team of virtual robots for combat against other players' teams. The live demo will show how agents in NERO adapt in real time as they interact with the player. In the future, rtNEAT may allow new kinds of educational and training applications through interactive and adapting games.
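To make the steady-state idea concrete, here is a minimal Python sketch of an rtNEAT-style replacement loop. It is hypothetical and not NERO's actual code: genomes are reduced to flat weight vectors, whereas real rtNEAT also speciates the population and mutates network topology (the "augmenting topologies" part). Every few game ticks, the worst-performing agent that has lived long enough to be fairly evaluated is removed and replaced by the offspring of two fitness-biased parents, which enters the game immediately.

import random
from dataclasses import dataclass

@dataclass
class Agent:
    weights: list          # stand-in for a full NEAT genome
    fitness: float = 0.0
    age: int = 0

REPLACE_EVERY = 20   # game ticks between replacements (illustrative)
MIN_AGE = 100        # ticks an agent must live before it can be culled

def breed(p1, p2, rate=0.1):
    # Uniform crossover of two parents plus occasional Gaussian mutation.
    child = [random.choice(pair) for pair in zip(p1.weights, p2.weights)]
    child = [w + random.gauss(0, 0.5) if random.random() < rate else w
             for w in child]
    return Agent(child)

def rtneat_tick(population, tick):
    for agent in population:
        agent.age += 1
    if tick % REPLACE_EVERY != 0:
        return
    eligible = [a for a in population if a.age >= MIN_AGE]
    if not eligible:
        return
    # Remove the worst sufficiently-evaluated agent...
    population.remove(min(eligible, key=lambda a: a.fitness))
    # ...and replace it with the offspring of two fitness-biased parents.
    p1, p2 = random.choices(population,
                            weights=[max(a.fitness, 1e-6) for a in population],
                            k=2)
    population.append(breed(p1, p2))  # the new agent enters the game at once

The key design choice is that evolution never pauses the game: only one agent changes at a time, so the team keeps playing while it improves.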


Churchill

AAAI Conferences

Real-Time Strategy games have become a popular test-bed for modern AI systems due to their real-time computational constraints, complex multi-unit control problems, and imperfect information. One of the most important aspects of any RTS AI system is the efficient control of units in complex combat scenarios, also known as micromanagement. Recently, a model-based heuristic search technique called Portfolio Greedy Search (PGS) has shown promising performance for providing real-time decision making in RTS combat scenarios, but has so far only been tested in SparCraft, an RTS combat simulator. In this paper we present the first integration of PGS into the StarCraft game engine, and compare its performance to the current state-of-the-art deep reinforcement learning method in several benchmark combat scenarios. We then perform the same experiments within the SparCraft simulator in order to investigate any differences between PGS performance in the simulator and in the actual game. Lastly, we investigate how varying parameters of the SparCraft simulator affect the performance of PGS in the StarCraft game engine. We demonstrate that the performance of PGS relies heavily on the accuracy of the underlying model, outperforming other techniques only in scenarios where the SparCraft simulation model closely matches the StarCraft game engine.
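For reference, the following Python sketch shows the shape of Portfolio Greedy Search as described in the literature; the script names, the simulate() playout function, and the pass count are illustrative stand-ins, not SparCraft's actual interfaces. Each unit is assigned one script from a small portfolio, and the assignment is improved greedily, one unit at a time, by evaluating candidate assignments with forward playouts:

PORTFOLIO = ["attack-closest", "attack-weakest", "kite", "no-overkill"]

def portfolio_greedy_search(state, my_units, enemy_script, simulate, passes=2):
    # Seed: every unit starts with the first script in the portfolio.
    assignment = {u: PORTFOLIO[0] for u in my_units}

    def value(asg):
        # Play the combat forward with both sides following their scripts
        # and score the resulting state (e.g., remaining hit points).
        return simulate(state, asg, enemy_script)

    for _ in range(passes):              # a few greedy improvement passes
        for unit in my_units:
            best_script, best_val = assignment[unit], value(assignment)
            for script in PORTFOLIO:     # vary this unit's script only
                assignment[unit] = script
                v = value(assignment)
                if v > best_val:
                    best_script, best_val = script, v
            assignment[unit] = best_script
    return assignment                    # scripts then emit concrete actions

The full algorithm also alternates improvement of the opponent's script assignment, and the paper's central finding maps directly onto this sketch: the greedy choice is only as good as the model behind simulate().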


Nvidia GeForce RTX: Every game that supports real-time ray tracing and Deep Learning Super Sampling

PCWorld

Nvidia revealed the boundary-pushing GeForce RTX 20-series on Monday, unleashing GeForce RTX 2070, RTX 2080, and RTX 2080 Ti graphics cards brimming with fancy new tech that promises equally new gaming capabilities. Foremost among those feats is real-time ray tracing, the ultra-difficult realistic lighting technology that gives Nvidia's new cards their "RTX" moniker. The RTX cards also support Deep Learning Super-Sampling (DLSS), a fresh Nvidia super-sampling method that puts the AI tensor cores embedded within the GPUs to work. Now we know which PC games will support these features, a crucial step, since all the luxurious tech in the world means nothing if games don't actually tap into it. Both real-time ray tracing and DLSS will debut with solid backing, as made clear by Nvidia's games partner announcement.


Achieving Goals Quickly Using Real-time Search: Experimental Results in Video Games

Journal of Artificial Intelligence Research

In real-time domains such as video games, planning happens concurrently with execution and the planning algorithm has a strictly bounded amount of time before it must return the next action for the agent to execute. We explore the use of real-time heuristic search in two benchmark domains inspired by video games. Unlike classic benchmarks such as grid pathfinding and the sliding tile puzzle, these new domains feature exogenous change and directed state space graphs. We consider the setting in which planning and acting are concurrent and we use the natural objective of minimizing goal achievement time. Using both the classic benchmarks and the new domains, we investigate several enhancements to a leading real-time search algorithm, LSS-LRTA*. We show experimentally that 1) it is better to plan after each action or to use a dynamically sized lookahead, 2) A*-based lookahead can cause undesirable actions to be selected, and 3) on-line de-biasing of the heuristic can lead to improved performance. We hope this work encourages future research on applying real-time search in dynamic domains.
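As background for these enhancements, here is a compressed, hypothetical Python sketch of one LSS-LRTA* planning episode; succ(s), which yields (neighbor, cost) pairs, and the mutable heuristic table h are stand-ins, and real implementations add the tie-breaking and bookkeeping the paper analyzes. The cycle is: an A* lookahead bounded by an expansion budget, a Dijkstra-style backup of heuristic values from the frontier (the step that on-line de-biasing refines), and commitment to a path toward the most promising frontier state.

import heapq
from itertools import count

def lss_lrta_star_step(start, goal, succ, h, budget=100):
    tie = count()                        # breaks heap ties deterministically
    g, parent = {start: 0.0}, {start: None}
    open_heap = [(h.get(start, 0.0), next(tie), start)]
    closed, edges = set(), []

    # 1) Lookahead: expand at most `budget` states with A*.
    while open_heap and len(closed) < budget:
        _, _, s = heapq.heappop(open_heap)
        if s in closed:
            continue
        if s == goal:                    # goal reached inside the lookahead:
            path = []                    # follow A* parents directly
            while s is not None:
                path.append(s)
                s = parent[s]
            return list(reversed(path))
        closed.add(s)
        for t, cost in succ(s):
            edges.append((s, t, cost))
            if g[s] + cost < g.get(t, float("inf")):
                g[t], parent[t] = g[s] + cost, s
                heapq.heappush(open_heap, (g[t] + h.get(t, 0.0), next(tie), t))

    frontier = {s for _, _, s in open_heap if s not in closed}
    if not frontier:
        return []                        # dead end: no frontier to move toward

    # 2) Learning: back up h-values from the frontier into the closed set
    #    so that repeated visits escape heuristic local minima.
    for s in closed:
        h[s] = float("inf")
    pq = [(h.get(s, 0.0), next(tie), s) for s in frontier]
    heapq.heapify(pq)
    settled = set()
    while pq:
        hs, _, s = heapq.heappop(pq)
        if s in settled:
            continue
        settled.add(s)
        for u, v, c in edges:            # relax predecessors inside the LSS
            if v == s and u in closed and c + hs < h[u]:
                h[u] = c + hs
                heapq.heappush(pq, (h[u], next(tie), u))

    # 3) Acting: commit to the path toward the best-looking frontier state.
    best = min(frontier, key=lambda s: g[s] + h.get(s, 0.0))
    path, s = [], best
    while s is not None:
        path.append(s)
        s = parent[s]
    return list(reversed(path))          # path[0] == start; execute a prefix

Enhancement 1) from the abstract corresponds to how this function is scheduled: calling it after every executed action, or varying the budget parameter dynamically, rather than committing to the whole returned path with a fixed lookahead.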


AlphaStar: Mastering the Real-Time Strategy Game StarCraft II (DeepMind)

#artificialintelligence

There are several different ways to play the game, but in esports the most common is a 1v1 tournament played over five games. To start, a player must choose to play one of three different alien "races" - Zerg, Protoss or Terran, all of which have distinctive characteristics and abilities (although professional players tend to specialise in one race). Each player starts with a number of worker units, which gather basic resources to build more units and structures and create new technologies. These in turn allow a player to harvest other resources, build more sophisticated bases and structures, and develop new capabilities that can be used to outwit the opponent. To win, a player must carefully balance big-picture management of their economy - known as macro - along with low-level control of their individual units - known as micro.