If game characters could learn through interacting with the player, their behavior could improve as the game is played, keeping it interesting. The real-time NeuroEvolution of Augmenting Topologies (rtNEAT) method, which can evolve increasingly complex artificial neural networks in real time as a game is being played, will be presented. The rtNEAT method makes possible an entirely new genre of video games in which the player trains a team of agents through a series of customized exercises. In order to demonstrate this concept, the NeuroEvolving Robotic Operatives (NERO) game was built based on rtNEAT. In NERO, the player trains a team of virtual robots for combat against other players' teams. The live demo will show how agents in NERO adapt in real time as they interact with the player. In the future, rtNEAT may allow new kinds of educational and training applications through interactive and adapting games.
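The distinctive idea in rtNEAT is that evolution happens continuously while the game runs: rather than evaluating a whole generation and replacing it at once, the single worst-performing agent is periodically swapped for a mutated offspring of a fitter one, so the team keeps playing as it improves. The sketch below illustrates only that replacement loop under simplifying assumptions; the `Agent` class, `replace_worst` function, and the fixed-topology linear network are illustrative inventions, and real rtNEAT additionally evolves network structure and uses speciation to protect innovation.

```python
import random

class Agent:
    """Toy agent with a fixed-topology linear net: 3 inputs -> 2 outputs.
    (A simplification: actual rtNEAT also mutates the topology itself.)"""
    def __init__(self, weights=None):
        self.weights = weights or [random.uniform(-1, 1) for _ in range(6)]
        self.fitness = 0.0

    def act(self, inputs):
        # Each output is a weighted sum of the inputs.
        return [sum(w * x for w, x in zip(self.weights[o * 3:(o + 1) * 3], inputs))
                for o in range(2)]

    def mutated(self, rate=0.1):
        # Offspring = parent weights plus small Gaussian noise.
        return Agent([w + random.gauss(0, rate) for w in self.weights])

def replace_worst(population):
    """The real-time step: swap the single worst agent for a mutated copy
    of a well-performing one, so play is never interrupted."""
    population.sort(key=lambda a: a.fitness, reverse=True)
    parent = random.choice(population[:len(population) // 2])  # top half
    population[-1] = parent.mutated()

# Toy game loop: fitness is how close the first output gets to 1.0.
population = [Agent() for _ in range(10)]
for tick in range(200):
    for agent in population:
        out = agent.act([1.0, 0.5, -0.5])
        agent.fitness = -abs(out[0] - 1.0)
    if tick % 20 == 19:  # every 20 ticks, evolve one individual
        replace_worst(population)

best = max(a.fitness for a in population)
```

Because only one individual changes at a time, the player sees gradual, ongoing adaptation rather than abrupt generational jumps, which is what makes the approach usable inside a live game.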
Nvidia revealed the boundary-pushing GeForce RTX 20-series on Monday, unleashing GeForce RTX 2070, RTX 2080, and RTX 2080 Ti graphics cards brimming with fancy new tech that promises to support fancy new gaming capabilities. Foremost among those feats is real-time ray tracing, the ultra-difficult realistic lighting technology that gives Nvidia's new cards their "RTX" moniker. The RTX cards also support Deep Learning Super-Sampling (DLSS), a fresh Nvidia super-sampling method that puts the AI tensor cores embedded within the GPUs to work. Now, we know which PC games will support them--a crucial step, since all the luxurious tech in the world means nothing if games don't actually tap into it. Both real-time ray tracing and DLSS will debut with a solid backing, as made clear by Nvidia's games partner announcement.
It's funny the ways pop culture conditions our sense of how humans and machine intelligence (MI) might interact. When Threepio warns Han Solo that it's almost impossible to successfully navigate an asteroid field, how does Solo respond? Artificial intelligence abounds in that galaxy far, far away--but not with much obvious effect on human decision-making. Iron Man offers a different take on MI. Tony Stark builds intelligence into almost every aspect of his life--managing smart devices in his home, helping him engineer new inventions, even offering real-time analysis to help him counter opponents in combat.
Most of the games that machines can now challenge humans in are strategic, but slow: Chess, Go and poker, unless played in very specific settings, have no time constraints on player moves. That is what makes the work of research group OpenAI in the online team brawler Dota 2 - which requires real-time decision-making among potentially dozens of choices in a single frame - so different. OpenAI's bots, the OpenAI Five, went head-to-head against teams of professional players at Dota 2's annual championship, The International, this August. Although the bots lost, the matches provided an insight into how reinforcement learning is changing the game when it comes to artificial intelligence. It's safe to say that AI has a reputation in gaming: many players consider a match to be an instant loss if they have to play with a bot, and a disconnect is often accompanied by "GG".
There are several different ways to play the game, but in esports the most common is a 1v1 contest played over a series of up to five games. To start, a player must choose to play one of three different alien "races" - Zerg, Protoss or Terran, all of which have distinctive characteristics and abilities (although professional players tend to specialise in one race). Each player starts with a number of worker units, which gather basic resources to build more units and structures and create new technologies. These in turn allow a player to harvest other resources, build more sophisticated bases and structures, and develop new capabilities that can be used to outwit the opponent. To win, a player must carefully balance big-picture management of their economy - known as macro - along with low-level control of their individual units - known as micro.