EditRoom: LLM-parameterized Graph Diffusion for Composable 3D Room Layout Editing
Kaizhi Zheng, Xiaotong Chen, Xuehai He, Jing Gu, Linjie Li, Zhengyuan Yang, Kevin Lin, Jianfeng Wang, Lijuan Wang, Xin Eric Wang
"Sure, I can do that. I will first use remove function to remove the side tables and "I want to remove the side EditRoom is a unified language-guided 3D scene layout editing framework that can automatically execute all layout editing types with natural language commands, which includes the command parameterizer for natural language comprehension and the scene editor for editing execution. Given a source scene and natural language commands, it can generate a coherent and appropriate target scene. Given the steep learning curve of professional 3D software and the timeconsuming process of managing large 3D assets, language-guided 3D scene editing has significant potential in fields such as virtual reality, augmented reality, and gaming. However, recent approaches to language-guided 3D scene editing either require manual interventions or focus only on appearance modifications without supporting comprehensive scene layout changes. In response, we propose Edit-Room, a unified framework capable of executing a variety of layout edits through natural language commands, without requiring manual intervention. Specifically, EditRoom leverages Large Language Models (LLMs) for command planning and generates target scenes using a diffusion-based method, enabling six types of edits: rotate, translate, scale, replace, add, and remove. To address the lack of data for language-guided 3D scene editing, we have developed an automatic pipeline to augment existing 3D scene synthesis datasets and introduced EditRoom-DB, a large-scale dataset with 83k editing pairs, for training and evaluation. Our experiments demonstrate that our approach consistently outperforms other baselines across all metrics, indicating higher accuracy and coherence in language-guided scene layout editing. Traditionally, editing 3D scenes requires manual intervention through specialized software like Blender (Community, 2024), which demands substantial expertise and considerable time for resource management. As a result, language-guided 3D scene editing has emerged as a promising technology for next-generation 3D software. To build an automated system capable of interpreting natural language and manipulating scenes, the system must be able to align complex, diverse, and often ambiguous language commands with various editing actions while also comprehending the global spatial structure of the scene. Additionally, the relatively small size of available 3D scene datasets presents a challenge for developing large-scale pretrained models necessary for fully automated, end-to-end language-guided scene editing.
arXiv.org Artificial Intelligence
Oct-3-2024