Mixed-Initiative Level Design with RL Brush

arXiv.org Artificial Intelligence

Modern games often rely on procedural content generation (PCG) to create large amounts of content autonomously or with limited or no human input. PCG methods are used with many different design goals in mind, including enabling a particular aesthetic. They can also be used to streamline time-intensive tasks such as modeling and designing thousands of unique tree assets for a forest environment. Procedurally generated content has been used in games since the early 1980s. Early PCG-enabled games like Rogue (Michael Toy, 1980) used PCG to expand the overall depth of the game by generating dungeons as well as coping with the hardware limitations of the day (Yannakakis and Togelius, 2018). This section will lay out more contemporary applications and methods of generating game content.


Spiders: AR app featuring virtual spiders offers cure for arachnophobia

Daily Mail - Science & tech

If the thought of being near a spider terrifies you, there's good news - scientists have created an augmented reality app that overlays a virtual 3D spider on your hand as a cure for arachnophobia. Called Phobys, the free app, created at the University of Basel in Switzerland, is available in both Apple's App Store and Google Play for Android. The Phobys app has already shown itself to be effective in a clinical trial to reduce the severity of arachnophobia, researchers report. Volunteers with arachnophobia experienced less fear when presented with real spiders after they'd used the app at home, researchers found. Augmented reality, or AR, layers computer-generated images on top of real-life surroundings, and is used in apps such as Pokémon Go to bring digital components into the real world.


PCGRL: Procedural Content Generation via Reinforcement Learning

arXiv.org Artificial Intelligence

We investigate how reinforcement learning can be used to train level-designing agents. This represents a new approach to procedural content generation in games, where level design is framed as a game, and the content generator itself is learned. By seeing the design problem as a sequential task, we can use reinforcement learning to learn how to take the next action so that the expected final level quality is maximized. This approach can be used when few or no examples exist to train from, and the trained generator is very fast. We investigate three different ways of transforming two-dimensional level design problems into Markov decision processes and apply these to three game environments.
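The core idea above, framing level design as a sequential decision problem, can be sketched in a few lines. This is a toy illustration, not the paper's actual code: the environment shape, the "narrow" tile-by-tile scan, and the wall-density heuristic are all hypothetical stand-ins, and a random policy takes the place of a trained RL agent.

```python
import random

# Toy sketch of level design as an MDP (illustrative names, not the
# paper's API): the agent visits one tile per step and chooses its type;
# the reward is the change in a simple level-quality heuristic.

WIDTH, HEIGHT = 8, 8
TARGET_WALL_FRACTION = 0.3  # hypothetical design goal

def quality(level):
    """Heuristic: how close the wall density is to the target."""
    walls = sum(row.count(1) for row in level)
    frac = walls / (WIDTH * HEIGHT)
    return 1.0 - abs(frac - TARGET_WALL_FRACTION)

def step(level, pos, action):
    """Apply one design action (0=floor, 1=wall) at the current tile."""
    before = quality(level)
    x, y = pos
    level[y][x] = action
    reward = quality(level) - before      # reward the *change* in quality
    x, y = (x + 1) % WIDTH, y + (x + 1) // WIDTH  # scan tiles left to right
    return level, reward, (x, y), y >= HEIGHT

# A random "designer" standing in for a trained RL policy:
level = [[random.randint(0, 1) for _ in range(WIDTH)] for _ in range(HEIGHT)]
pos, done = (0, 0), False
while not done:
    level, r, pos, done = step(level, pos, random.choice([0, 1]))
```

Rewarding the change in quality at each step, rather than only scoring the finished level, is what makes the sequential framing trainable with standard RL.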


Learning Controllable Content Generators

arXiv.org Artificial Intelligence

It has recently been shown that reinforcement learning can be used to train generators capable of producing high-quality game levels, with quality defined in terms of some user-specified heuristic. To ensure that these generators' output is sufficiently diverse (that is, not amounting to the reproduction of a single optimal level configuration), the generation process is constrained such that the initial seed results in some variance in the generator's output. However, this results in a loss of control over the generated content for the human user. We propose to train generators capable of producing controllably diverse output by making them "goal-aware." To this end, we add conditional inputs representing how close a generator is to some heuristic, and also modify the reward mechanism to incorporate that value. Testing on multiple domains, we show that the resulting level generators are capable of exploring the space of possible levels in a targeted, controllable manner, producing levels of comparable quality to their goal-unaware counterparts while remaining diverse along designer-specified dimensions.
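The two mechanisms described, a conditional input encoding distance to a goal, and a reward that tracks that distance, can be sketched as follows. All names and the toy metric here are hypothetical illustrations of the idea, not the authors' implementation.

```python
# Sketch of the "goal-aware" conditioning idea (all names hypothetical):
# the observation is augmented with how far the current level is from a
# user-chosen target metric, and the reward moves the generator toward
# that target rather than toward one fixed optimum.

def metric(level):
    """Stand-in heuristic (e.g. path length); here a toy sum over tiles."""
    return sum(level)

def observe(level, target):
    """Conditional input: append the signed distance to the goal."""
    return list(level) + [target - metric(level)]

def reward(level_before, level_after, target):
    """Positive when an edit moves the level closer to the target metric."""
    d_before = abs(target - metric(level_before))
    d_after = abs(target - metric(level_after))
    return d_before - d_after
```

Because the target is an input rather than a constant baked into the reward, one trained generator can be steered across a range of goal values at generation time.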


Multi-Objective level generator generation with Marahel

arXiv.org Artificial Intelligence

This paper introduces a new system to design constructive level generators by searching the space of constructive level generators defined by the Marahel language. We use NSGA-II, a multi-objective optimization algorithm, to search for generators for three different problems (Binary, Zelda, and Sokoban). We restrict the representation to a subset of the Marahel language to push the evolution to find more efficient generators. The results show that the generated generators were able to achieve good performance on most of the fitness functions over these three problems. However, on Zelda and Sokoban, they tend to depend more on the initial state than on modifying the map.
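Multi-objective search of the kind NSGA-II performs rests on Pareto dominance: a candidate generator survives if no other candidate beats it on every objective at once. The sketch below shows only that selection core with hypothetical objective tuples (e.g. level quality vs. generator efficiency); the paper presumably used a full NSGA-II implementation with non-dominated sorting and crowding distance.

```python
# Minimal sketch of the Pareto-dominance test at the heart of NSGA-II
# (illustrative only). Each candidate is a tuple of objective scores,
# all to be maximized.

def dominates(a, b):
    """a dominates b: no worse on every objective, better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scores):
    """Keep every candidate that no other candidate dominates."""
    return [s for s in scores
            if not any(dominates(o, s) for o in scores if o is not s)]

front = pareto_front([(0.9, 0.2), (0.5, 0.8), (0.4, 0.4), (0.7, 0.6)])
# (0.4, 0.4) is dominated by (0.7, 0.6); the other three trade off
# against each other and are all kept.
```

Returning a front of trade-offs, rather than a single winner, is what lets the evolved generators balance several fitness functions at once.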