Steps towards prompt-based creation of virtual worlds

Roberts, Jasmine; Banburski-Fahey, Andrzej; Lanier, Jaron

arXiv.org, Artificial Intelligence

Large language models trained for code generation can be applied to speaking virtual worlds into existence (creating virtual worlds). In this work we show that prompt-based methods can both accelerate in-VR level editing and become part of gameplay rather than just part of game development. As an example, we present Codex VR Pong, which shows non-deterministic game mechanics using generative processes to create not only static content but also non-trivial interactions between 3D objects. This demonstration naturally leads to an integral discussion of how one would evaluate and benchmark experiences created by generative models, as there are no qualitative or quantitative metrics that apply in these scenarios. We conclude by discussing impending challenges of AI-assisted co-creation in VR.

Multimodal text-to-image models, like DALL-E 2 [34], Midjourney [11] or Stable Diffusion [35], are raising concerns about displacing concept artists and have already won at least one major art competition [36]. Large Language Models (LLMs), like GPT-3 [6], are not only generating very convincing text completions, but have recently become capable of generating code with models like OpenAI Codex [8] or AlphaCode [25]. We propose in this paper that these capabilities can be combined to allow "speaking the world into existence", that is, taking natural language descriptions and turning them into interactive visual scenes within a game engine. In particular, this has the potential to allow authoring Virtual Reality (VR) experiences from within the headset, as well as to enable completely novel modes of gameplay.
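
The pipeline described here, sending a natural-language request to a code-generation model and executing the returned snippet inside a game engine, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the Codex engine name, the prompt framing, and the execute_in_engine() hook are all assumed for the sake of the example.

```python
# Minimal sketch of a prompt-to-scene pipeline, assuming the legacy OpenAI
# Python client (< 1.0) and a Codex-family completion model. The engine name,
# prompt framing, and execute_in_engine() hook are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder


def generate_scene_code(description: str) -> str:
    """Turn a natural-language scene description into engine scripting code."""
    prompt = (
        "// Unity C# snippet that modifies the currently loaded scene.\n"
        f"// Request: {description}\n"
    )
    response = openai.Completion.create(
        engine="code-davinci-002",  # Codex-family model (assumed)
        prompt=prompt,
        max_tokens=256,
        temperature=0.0,            # deterministic output for level editing
        stop=["// Request:"],       # stop before the model invents a new request
    )
    return response["choices"][0]["text"]


def execute_in_engine(code: str) -> None:
    """Hypothetical hook: compile and hot-load the generated snippet into the
    running VR session (e.g. via a scripting bridge in the game engine)."""
    raise NotImplementedError


if __name__ == "__main__":
    snippet = generate_scene_code("make the ball bounce twice as high off the paddle")
    execute_in_engine(snippet)
```

In a gameplay setting such as Codex VR Pong, the sampling temperature would be raised rather than held at zero, since the point made in the paper is precisely that the generative process yields non-deterministic game mechanics rather than a fixed, authored behavior.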
