minigame
The Trolley Solution: the internet's most memed moral dilemma becomes a video game
In 1967, British philosopher Philippa Foot unwittingly created one of the internet's most regurgitated memes. A runaway train is hurtling towards five people tied to the tracks. You can pull a lever to divert the train to a different track to which only one person is tied. Do you intervene to kill the one and spare the five? What if one of the tracks twisted into a really cool loop-the-loop?
- Information Technology > Communications > Networks (0.62)
- Information Technology > Artificial Intelligence > Games (0.42)
Collaborative Quest Completion with LLM-driven Non-Player Characters in Minecraft
Rao, Sudha, Xu, Weijia, Xu, Michael, Leandro, Jorge, Lobb, Ken, DesGarennes, Gabriel, Brockett, Chris, Dolan, Bill
The use of generative AI in video game development is on the rise, and as the conversational and other capabilities of large language models continue to improve, we expect LLM-driven non-player characters (NPCs) to become widely deployed. In this paper, we seek to understand how human players collaborate with LLM-driven NPCs to accomplish in-game goals. We design a minigame within Minecraft where a player works with two GPT-4-driven NPCs to complete a quest. We perform a user study in which 28 Minecraft players play this minigame and share their feedback. Analyzing the game logs and recordings, we find that several patterns of collaborative behavior emerge among the NPCs and the human players. We also report on the current limitations of language-only models that do not have rich game-state or visual understanding. We believe that this preliminary study and analysis will inform future game developers on how to better exploit these rapidly improving generative AI models for collaborative roles in games.
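The collaboration loop the abstract describes can be sketched as follows. The class, prompt format, and `fake_llm` stub are all invented for illustration; the paper itself puts GPT-4 behind NPCs in an actual Minecraft minigame.

```python
# Minimal sketch of a quest-NPC turn, assuming a pluggable LLM backend.
from dataclasses import dataclass, field

@dataclass
class QuestNPC:
    name: str
    persona: str
    history: list = field(default_factory=list)

    def respond(self, player_msg: str, game_state: dict, llm) -> str:
        # Language-only models lack direct game-state or visual grounding
        # (a limitation the paper reports), so the relevant state has to be
        # serialized into the prompt text.
        prompt = (
            f"You are {self.name}. {self.persona}\n"
            f"Game state: {game_state}\n"
            f"Player says: {player_msg}\n"
            f"{self.name}:"
        )
        reply = llm(prompt)
        self.history.append((player_msg, reply))
        return reply

def fake_llm(prompt: str) -> str:
    # Stand-in for an actual GPT-4 call.
    return "I will gather wood while you mine stone."

npc = QuestNPC("Builder", "You help the player finish a building quest.")
reply = npc.respond("Can you help me build a shelter?", {"inventory": ["axe"]}, fake_llm)
```

Keeping the dialogue history on the NPC object is what lets multi-turn collaboration patterns emerge; in the paper's setting the real game state would come from Minecraft rather than a hand-built dict.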
- Research Report (1.00)
- Questionnaire & Opinion Survey (1.00)
Massively Multiagent Minigames for Training Generalist Agents
Choe, Kyoung Whan, Sullivan, Ryan, Suárez, Joseph
Meta MMO is built on top of Neural MMO, a massively multiagent environment that has been the subject of two previous NeurIPS competitions. Our work expands Neural MMO with several computationally efficient minigames. We explore generalization across Meta MMO by learning to play several minigames with a single set of weights. We release the environment, baselines, and training code under the MIT license. We hope that Meta MMO will spur additional progress on Neural MMO and, more generally, will serve as a useful benchmark for many-agent generalization.
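The core idea of "several minigames with a single set of weights" can be illustrated with a toy sketch: one shared value table, with the minigame identity standing in for the observation. The three minigames and their reward rules below are invented for this sketch, not taken from Meta MMO.

```python
# Toy generalist agent: one shared table trained across interleaved minigames.
import random

random.seed(0)

minigames = {"forage": 0, "combat": 1, "navigate": 2}  # obs -> rewarded action
n_actions = 3
q = {}  # the single shared "set of weights": (obs, action) -> value

def act(obs, eps=0.1):
    # Epsilon-greedy action selection over the shared table.
    if random.random() < eps:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: q.get((obs, a), 0.0))

# Interleave episodes from all minigames, updating the same weights.
for _ in range(3000):
    obs = random.choice(list(minigames))
    a = act(obs)
    r = 1.0 if a == minigames[obs] else 0.0
    q[(obs, a)] = q.get((obs, a), 0.0) + 0.2 * (r - q.get((obs, a), 0.0))
```

Because the policy conditions on the observation, a single parameter set can behave correctly in every minigame at once; scaling this from a lookup table to a neural policy over Neural MMO observations is the benchmark's actual challenge.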
- Europe > Portugal > Braga > Braga (0.04)
- Africa > Ethiopia > Addis Ababa > Addis Ababa (0.04)
Hierarchical Reinforcement Learning in StarCraft II with Human Expertise in Subgoals Selection
Xu, Xinyi, Huang, Tiancheng, Wei, Pengfei, Narayan, Akshay, Leong, Tze-Yun
This work is inspired by recent advances in hierarchical reinforcement learning (HRL) (Barto and Mahadevan 2003; Hengst 2010), and by improvements in learning efficiency from heuristic-based subgoal selection and hindsight experience replay (HER) (Andrychowicz et al. 2017; Levy et al. 2019). We propose a new method to integrate HRL, HER, and effective subgoal selection based on human expertise to support sample-efficient learning and enhance the interpretability of the agent's behavior. Human expertise remains indispensable in many areas such as medicine (Buch, Ahmed, and Maruthappu 2018) and law (Cath 2018), where interpretability, explainability, and transparency are crucial in the decision-making process for ethical and legal reasons. Our method simplifies the complex task sets needed to achieve the overall objectives by decomposing them into subgoals at different levels of abstraction. Incorporating relevant subjective knowledge also significantly reduces the computational resources spent on exploration in RL, especially in high-speed, changing, and complex environments where the transition dynamics cannot be effectively learned and modelled in a short time. Experimental results in two StarCraft II (SC2) minigames demonstrate that our method achieves better sample efficiency than flat and end-to-end RL methods, and provides an effective way of explaining the agent's performance.
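The HER component the abstract builds on has a compact core: a failed trajectory is stored a second time with the goal the agent *actually* reached substituted for the goal it was given, turning a zero-reward episode into a useful one. The sketch below shows only that relabeling trick on a toy goal-reaching trajectory; the paper's hierarchy and human-selected subgoals are simplified away.

```python
# Hindsight relabeling on a transition list of
# (state, action, reward, next_state, goal) tuples.
def her_relabel(trajectory, achieved_goal):
    """Relabel a failed trajectory as if achieved_goal had been the goal,
    recomputing the sparse reward (1 only when next_state == goal)."""
    return [
        (s, a, 1.0 if s_next == achieved_goal else 0.0, s_next, achieved_goal)
        for (s, a, _r, s_next, _g) in trajectory
    ]

# A failed episode: the agent aimed for state 5 but only reached state 3.
traj = [(0, +1, 0.0, 1, 5), (1, +1, 0.0, 2, 5), (2, +1, 0.0, 3, 5)]
relabeled = her_relabel(traj, achieved_goal=3)
# Under the substituted goal, the final transition now carries reward 1.
```

In the paper's setting the relabeled goals are not arbitrary achieved states but subgoals chosen with human expertise, which is where the interpretability benefit comes from.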
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- Europe > Germany > Baden-Württemberg > Freiburg (0.04)
- (5 more...)
Asynchronous Advantage Actor-Critic Agent for Starcraft II
Alghanem, Basel, G, Keerthana P
Deep reinforcement learning, and especially the Asynchronous Advantage Actor-Critic (A3C) algorithm, has been successfully used to achieve super-human performance in a variety of video games. StarCraft II is a new challenge for the reinforcement learning community following the release of the PySC2 learning environment by Google DeepMind and Blizzard Entertainment. Although several AI developers have targeted it, few have achieved human-level performance. In this project we explain the complexities of this environment and discuss the results from our experiments on it. We have compared various architectures and shown that transfer learning can be an effective paradigm in reinforcement learning research for complex scenarios requiring skill transfer.
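The transfer-learning claim amounts to reusing weights trained on one scenario to warm-start a related one. The toy sketch below makes that concrete with a bandit-style task; the tasks, reward rule, and step counts are invented and are not the project's actual architecture.

```python
# Transfer as weight reuse: value estimates trained on a source scenario
# warm-start a related scenario that rewards the same skill (action).
import random

random.seed(2)
N_ACTIONS = 4

def train(q, best_action, steps, lr=0.3, eps=0.2):
    """Epsilon-greedy updates in place; returns the first step at which
    the greedy choice was already correct (or `steps` if never)."""
    first_correct = steps
    for t in range(steps):
        greedy = max(range(N_ACTIONS), key=lambda i: q[i])
        if greedy == best_action:
            first_correct = min(first_correct, t)
        a = random.randrange(N_ACTIONS) if random.random() < eps else greedy
        q[a] += lr * ((1.0 if a == best_action else 0.0) - q[a])
    return first_correct

# Pretrain on the source scenario, where action 1 pays off.
source_q = [0.0] * N_ACTIONS
train(source_q, best_action=1, steps=300)

warm_q = list(source_q)          # transferred weights
cold_q = [0.0] * N_ACTIONS       # learning from scratch
warm_steps = train(warm_q, best_action=1, steps=300)
cold_steps = train(cold_q, best_action=1, steps=300)
```

The warm start is correct from the first step, while the from-scratch run must rediscover the rewarding action through exploration; that gap is the sample-efficiency argument for transfer.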
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.14)
- North America > United States > California > Santa Clara County > Palo Alto (0.05)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > Switzerland > Basel-City > Basel (0.04)
Skyrim rendered in text – Filip Hracek – Medium
Going frame-by-frame in our naive start was obviously the wrong move. And going with "kill bandit" obviously made the level of abstraction too high, no matter whether the fight was described in text or represented through a minigame. Let's descend just a little bit from "kill bandit" into a tactics-based approach.
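One way to land between frame-by-frame simulation and a single "kill bandit" command is a tactics turn, where each player choice resolves several seconds of fighting and produces a line of narration. The moves, numbers, and phrasing below are invented for illustration and are not taken from the article.

```python
# A tactics-level combat turn for a text rendering of a fight.
import random

MOVES = {            # move -> (damage dealt on a hit, damage always taken)
    "lunge": (6, 3),
    "feint": (3, 1),
    "parry": (1, 0),
}

def tactics_turn(player_hp, bandit_hp, move, rng):
    dealt, taken = MOVES[move]
    if rng.random() < 0.75:      # the move connects
        bandit_hp -= dealt
        desc = f"Your {move} lands; the bandit staggers."
    else:
        desc = f"The bandit slips past your {move}."
    player_hp -= taken
    return player_hp, bandit_hp, desc

rng = random.Random(7)
hp, bhp = 20, 12
while bhp > 0 and hp > 0:
    hp, bhp, line = tactics_turn(hp, bhp, "lunge", rng)
```

Each turn is coarse enough to narrate in a sentence yet fine enough to leave the player meaningful choices, which is roughly the abstraction level the post is descending toward.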