minigame


The Trolley Solution: the internet's most memed moral dilemma becomes a video game

The Guardian

In 1967, British philosopher Philippa Foot unwittingly created one of the internet's most regurgitated memes. A runaway train is hurtling towards five people tied to the tracks. You can pull a lever to divert the train to a different track to which only one person is tied. Do you intervene to kill the one and spare the five? What if one of the tracks twisted into a really cool loop-the-loop?


Collaborative Quest Completion with LLM-driven Non-Player Characters in Minecraft

Rao, Sudha, Xu, Weijia, Xu, Michael, Leandro, Jorge, Lobb, Ken, DesGarennes, Gabriel, Brockett, Chris, Dolan, Bill

arXiv.org Artificial Intelligence

The use of generative AI in video game development is on the rise, and as the conversational and other capabilities of large language models continue to improve, we expect LLM-driven non-player characters (NPCs) to become widely deployed. In this paper, we seek to understand how human players collaborate with LLM-driven NPCs to accomplish in-game goals. We design a minigame within Minecraft where a player works with two GPT-4-driven NPCs to complete a quest. We perform a user study in which 28 Minecraft players play this minigame and share their feedback. On analyzing the game logs and recordings, we find that several patterns of collaborative behavior emerge from the NPCs and the human players. We also report on the current limitations of language-only models that do not have rich game-state or visual understanding. We believe that this preliminary study and analysis will inform future game developers on how to better exploit these rapidly improving generative AI models for collaborative roles in games.


Massively Multiagent Minigames for Training Generalist Agents

Choe, Kyoung Whan, Sullivan, Ryan, Suárez, Joseph

arXiv.org Artificial Intelligence

Meta MMO is built on top of Neural MMO, a massively multiagent environment that has been the subject of two previous NeurIPS competitions. Our work expands Neural MMO with several computationally efficient minigames. We explore generalization across Meta MMO by learning to play several minigames with a single set of weights. We release the environment, baselines, and training code under the MIT license. We hope that Meta MMO will spur additional progress on Neural MMO and, more generally, will serve as a useful benchmark for many-agent generalization.


Hierarchical Reinforcement Learning in StarCraft II with Human Expertise in Subgoals Selection

Xu, Xinyi, Huang, Tiancheng, Wei, Pengfei, Narayan, Akshay, Leong, Tze-Yun

arXiv.org Artificial Intelligence

This work is inspired by recent advances in hierarchical reinforcement learning (HRL) (Barto and Mahadevan 2003; Hengst 2010) and by improvements in learning efficiency from heuristic-based subgoal selection and hindsight experience replay (HER) (Andrychowicz et al. 2017; Levy et al. 2019). We propose a new method that integrates HRL, HER, and effective subgoal selection based on human expertise to support sample-efficient learning and enhance the interpretability of the agent's behavior. Human expertise remains indispensable in many areas such as medicine (Buch, Ahmed, and Maruthappu 2018) and law (Cath 2018), where interpretability, explainability, and transparency are crucial to the decision-making process for ethical and legal reasons. Our method simplifies complex task sets by decomposing the overall objective into subgoals at different levels of abstraction. Incorporating relevant subjective knowledge also significantly reduces the computational resources spent on exploration in RL, especially in fast-moving, changing, and complex environments where the transition dynamics cannot be effectively learned and modelled in a short time. Experimental results on two StarCraft II (SC2) minigames demonstrate that our method achieves better sample efficiency than flat and end-to-end RL methods, and provides an effective way to explain the agent's performance.
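The hindsight relabeling idea the abstract builds on can be sketched as follows. This is a minimal illustration of HER's "future" goal-sampling strategy, not the paper's exact formulation; the episode representation and the binary goal-match reward are assumptions made for the example.

```python
import random

def her_relabel(episode, k=4):
    """Hindsight experience replay, "future" strategy: relabel each
    transition with goals actually achieved later in the episode, so
    failed rollouts still yield useful reward signal.

    `episode` is a list of (state, action, achieved_goal) tuples;
    these names are illustrative, not from the paper."""
    relabeled = []
    for t, (state, action, achieved) in enumerate(episode):
        future = episode[t:]  # goals achieved at or after step t
        for _ in range(k):
            _, _, new_goal = random.choice(future)
            # sparse reward: 1 if this step already attains the substituted goal
            reward = 1.0 if achieved == new_goal else 0.0
            relabeled.append((state, action, new_goal, reward))
    return relabeled
```

In the paper's setting the relabeled goals would correspond to subgoals at some level of the hierarchy, so each level's policy can learn from whatever the agent happened to accomplish.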


Asynchronous Advantage Actor-Critic Agent for StarCraft II

Alghanem, Basel, G, Keerthana P

arXiv.org Artificial Intelligence

Deep reinforcement learning, and especially the Asynchronous Advantage Actor-Critic (A3C) algorithm, has been used to achieve super-human performance in a variety of video games. StarCraft II poses a new challenge for the reinforcement learning community following the release of the pysc2 learning environment by Google DeepMind and Blizzard Entertainment. Although several AI developers have targeted this environment, few have achieved human-level performance. In this project we describe the complexities of the environment and discuss the results of our experiments on it. We compare various architectures and show that transfer learning can be an effective paradigm in reinforcement learning research for complex scenarios requiring skill transfer.
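The advantage estimate at the core of A3C can be sketched as follows: each worker computes n-step returns for a rollout segment, bootstrapping from the critic's value estimate at the segment's end, and uses the gap between return and value as the policy-gradient weight. Function and variable names here are illustrative.

```python
def n_step_advantages(rewards, values, bootstrap_value, gamma=0.99):
    """n-step returns and advantages as used in A3C.

    rewards, values: per-step lists for one rollout segment.
    bootstrap_value: the critic's estimate V(s_T) at the segment's end
    (0.0 if the episode terminated there)."""
    returns = []
    R = bootstrap_value
    for r in reversed(rewards):
        R = r + gamma * R          # discounted return, accumulated backwards
        returns.append(R)
    returns.reverse()
    # advantage = n-step return minus the critic's baseline
    advantages = [ret - v for ret, v in zip(returns, values)]
    return returns, advantages
```

In the asynchronous scheme, each worker computes these quantities on its own rollout and sends the resulting gradients to shared parameters; the sketch above covers only the return/advantage arithmetic.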


Skyrim rendered in text – Filip Hracek – Medium

#artificialintelligence

Going frame-by-frame in our naive start was obviously the wrong move. And going with "kill bandit" obviously made the level of abstraction too high, no matter whether the fight was described in text or represented through a minigame. Let's descend just a little bit from "kill bandit" into a tactics-based approach.