Grammar-based Game Description Generation using Large Language Models

Tanaka, Tsunehiko, Simo-Serra, Edgar

arXiv.org Artificial Intelligence 

Abstract—Game Description Language (GDL) provides a standardized way to express diverse games in a machine-readable format, enabling automated game simulation and evaluation. While previous research has explored game description generation using search-based methods, generating GDL descriptions from natural language remains a challenging task. This paper presents a novel framework that leverages Large Language Models (LLMs) to generate grammatically accurate game descriptions from natural language. Our approach consists of two stages: first, we gradually generate a minimal grammar based on GDL specifications; second, we iteratively improve the game description through grammar-guided generation. Our framework employs a specialized parser that identifies valid subsequences and candidate symbols from LLM responses, enabling gradual refinement of the output to ensure grammatical correctness. Experimental results demonstrate that our iterative improvement approach significantly outperforms baseline methods that directly use LLM outputs. Our code is available at https://github.com/

A Game Description Language (GDL) [1]-[5] is a domain-specific language that expresses a wide range of games in a unified notation. For example, Ludii GDL [5] models over 1,000 games, primarily board games, as shown in Figure 1. Game descriptions represented in GDLs are highly machine-readable, making it easy to simulate gameplay using dedicated game engines. Given the amenability of GDLs to automatic game evaluation, they have been used extensively in research on automated game design. In particular, search-based methods such as evolutionary algorithms [4], MCTS [6], [7], and random forests [8] have proven successful in generating game descriptions. Most of this research has focused on mutating existing games according to fitness functions in order to generate novel games.
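The parser's role of identifying valid subsequences and candidate symbols can be illustrated with a minimal sketch. The code below is a hypothetical toy, not the paper's actual parser: it models a tiny Ludii-like grammar as a token-level automaton and, given an LLM output, returns the longest grammatically valid prefix together with the symbols that may legally follow it.

```python
# Hypothetical sketch of grammar-guided validation (not the paper's parser).
# The grammar is modeled as a small DFA over tokens of a toy GDL fragment.

def longest_valid_prefix(tokens, dfa, start="START"):
    """Return (prefix, candidates): the longest prefix of `tokens` the DFA
    accepts, and the sorted symbols that may legally follow that prefix."""
    state = start
    prefix = []
    for tok in tokens:
        transitions = dfa.get(state, {})
        if tok not in transitions:
            return prefix, sorted(transitions)  # candidate repair symbols
        prefix.append(tok)
        state = transitions[tok]
    return prefix, sorted(dfa.get(state, {}))

# Toy grammar accepting: ( game STRING ( players NUMBER ) )
DFA = {
    "START":    {"(": "GAME?"},
    "GAME?":    {"game": "NAME?"},
    "NAME?":    {"STRING": "BODY?"},
    "BODY?":    {"(": "PLAYERS?"},
    "PLAYERS?": {"players": "NUM?"},
    "NUM?":     {"NUMBER": "CLOSE1"},
    "CLOSE1":   {")": "CLOSE2"},
    "CLOSE2":   {")": "END"},
}

# An LLM output with an invalid token ("player" instead of "players"):
tokens = ["(", "game", "STRING", "(", "player", "NUMBER", ")", ")"]
prefix, candidates = longest_valid_prefix(tokens, DFA)
print(prefix)      # ['(', 'game', 'STRING', '(']
print(candidates)  # ['players']
```

In the framework described above, such valid-prefix and candidate-symbol information would be fed back to the LLM to correct the invalid continuation.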
However, the task of generating game descriptions from natural language text has not yet been sufficiently explored, despite its potential to lower the barrier to entry to game design for non-specialists. In this research, we use Large Language Models (LLMs) [9], [10], which excel at understanding textual context, to generate game descriptions from natural language text in a two-stage process that enforces grammatical correctness. LLMs are language models with an enormous number of parameters, pre-trained on vast amounts of text data. Previous work has shown that more accurate game descriptions can be generated by appropriately refining the prompt context. However, LLMs may still generate grammatically incorrect game descriptions.

(The authors are with Waseda University, Tokyo, Japan.)
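The second stage, iterative grammar-guided improvement, can be sketched as a loop that truncates the draft to its longest valid prefix and asks the model to continue from the grammar's candidate symbols. Everything below is an illustrative assumption, not the paper's implementation: `validate` and `llm_complete` are stand-ins for the grammar parser and the LLM call.

```python
# Hypothetical sketch of the iterative improvement loop (stage two).
# `validate` and `llm_complete` are mock stand-ins, not the paper's code.

def refine(tokens, validate, llm_complete, max_iters=5):
    """Truncate to the longest valid prefix, then let the model continue
    from the grammar's candidate symbols, until the draft parses fully."""
    for _ in range(max_iters):
        prefix, candidates = validate(tokens)
        if len(prefix) == len(tokens) and not candidates:
            return tokens  # grammatically complete
        tokens = prefix + llm_complete(prefix, candidates)
    return tokens

# Mock components: the toy grammar accepts exactly ["(", "game", ")"].
TARGET = ["(", "game", ")"]

def validate(tokens):
    n = 0
    while n < len(tokens) and n < len(TARGET) and tokens[n] == TARGET[n]:
        n += 1
    candidates = [TARGET[n]] if n < len(TARGET) else []
    return tokens[:n], candidates

def llm_complete(prefix, candidates):
    # A real LLM call would be constrained to the candidate symbols.
    return candidates[:1]

print(refine(["(", "gmae"], validate, llm_complete))  # ['(', 'game', ')']
```

The loop bounds the number of LLM calls with `max_iters`, so a model that never converges to a valid description still terminates with its best partially valid draft.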
