instruct-nerf2nerf




SIn-NeRF2NeRF: Editing 3D Scenes with Instructions through Segmentation and Inpainting

Hong, Jiseung, Lee, Changmin, Yu, Gyusang

arXiv.org Artificial Intelligence

TL;DR: Perform 3D object editing selectively by disentangling the object from the background scene.

Instruct-NeRF2NeRF (in2n) is a promising method for editing 3D scenes represented as Neural Radiance Fields (NeRFs) using text prompts. However, because it edits the object and the background together, it struggles with geometric modifications such as shrinking, scaling, or moving an object. In this project, we enable geometric changes to objects within a 3D scene by separating each object from the scene and editing it selectively. We perform object segmentation and background inpainting, and demonstrate various examples of freely resizing or moving the disentangled objects in three-dimensional space.
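The abstract describes a decompose-edit-recompose pipeline: segment the object out of the NeRF, inpaint the background it leaves behind, transform the object independently, and render the two fields together. Below is a minimal, runnable sketch of the recomposition step only, under assumptions: `object_field` and `background_field` are toy analytic stand-ins for the segmented and inpainted NeRFs, and the inverse warp in `edited_object_field` is one standard way to realize resizing and moving, not necessarily the authors' exact formulation.

```python
import numpy as np

# Hypothetical stand-ins for the two separated fields. In the paper these
# would be trained NeRFs; simple analytic functions keep the sketch runnable.
def object_field(pts):
    """Toy 'object' radiance field: a dense red blob at the origin."""
    d = np.linalg.norm(pts, axis=-1)
    sigma = np.exp(-10.0 * d**2) * 50.0                    # volume density
    rgb = np.tile([1.0, 0.2, 0.2], (*pts.shape[:-1], 1))   # red-ish color
    return sigma, rgb

def background_field(pts):
    """Toy inpainted background: uniform thin gray fog."""
    sigma = np.full(pts.shape[:-1], 0.5)
    rgb = np.tile([0.5, 0.5, 0.5], (*pts.shape[:-1], 1))
    return sigma, rgb

def edited_object_field(pts, scale=0.5, offset=np.array([0.3, 0.0, 0.0])):
    """Shrink/move the object by warping sample points back into its
    canonical frame before querying the object field (inverse warp)."""
    return object_field((pts - offset) / scale)

def composite_render(ray_o, ray_d, n_samples=64, t_near=0.0, t_far=2.0):
    """Volume-render object + background jointly: sum the densities,
    density-weight the colors, and alpha-composite along the ray."""
    t = np.linspace(t_near, t_far, n_samples)
    pts = ray_o + t[:, None] * ray_d                        # (S, 3) samples
    s_obj, c_obj = edited_object_field(pts)
    s_bg, c_bg = background_field(pts)
    sigma = s_obj + s_bg
    color = (s_obj[:, None] * c_obj + s_bg[:, None] * c_bg) / (sigma[:, None] + 1e-8)
    delta = (t_far - t_near) / n_samples
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = alpha * trans
    return (weights[:, None] * color).sum(axis=0)           # final pixel RGB

pixel = composite_render(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]))
print(pixel)
```

The key design point the abstract implies is that the geometric edit touches only the object's sample coordinates, so the inpainted background remains fixed while the object is freely resized or moved.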


InseRF: Text-Driven Generative Object Insertion in Neural 3D Scenes

Shahbazi, Mohamad, Claessens, Liesbeth, Niemeyer, Michael, Collins, Edo, Tonioni, Alessio, Van Gool, Luc, Tombari, Federico

arXiv.org Artificial Intelligence

We introduce InseRF, a novel method for generative object insertion in NeRF reconstructions of 3D scenes. Based on a user-provided textual description and a 2D bounding box in a reference viewpoint, InseRF generates new objects in 3D scenes. Recently, methods for 3D scene editing have been profoundly transformed by the strong priors of text-to-image diffusion models in 3D generative modeling. Existing methods are mostly effective at editing 3D scenes via style and appearance changes or at removing existing objects; generating new objects, however, remains a challenge for such methods, which we address in this study. Specifically, we propose grounding 3D object insertion in a 2D object insertion in a reference view of the scene. The 2D edit is then lifted to 3D using a single-view object reconstruction method. The reconstructed object is then inserted into the scene, guided by the priors of monocular depth estimation methods. We evaluate our method on various 3D scenes and provide an in-depth analysis of the proposed components. Our experiments with generative insertion of objects in several 3D scenes demonstrate the effectiveness of our method compared to existing methods. InseRF is capable of controllable and 3D-consistent object insertion without requiring explicit 3D information as input. Please visit our project page at https://mohamad-shahbazi.github.io/inserf.
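One concrete step in the pipeline is turning the user's 2D bounding box into a 3D placement using monocular depth. The sketch below illustrates that step under standard pinhole-camera assumptions; `unproject_bbox` is a hypothetical helper, not InseRF's API, and the constant depth map stands in for a monocular depth estimate.

```python
import numpy as np

def unproject_bbox(bbox, depth_map, K):
    """Back-project the center of a 2D bounding box to a 3D point using a
    monocular depth estimate and pinhole intrinsics K.

    bbox: (u_min, v_min, u_max, v_max) in pixels; depth_map: (H, W) metric depth.
    Returns the 3D center in camera coordinates and a rough metric extent.
    """
    u_min, v_min, u_max, v_max = bbox
    u_c, v_c = (u_min + u_max) / 2.0, (v_min + v_max) / 2.0
    z = float(depth_map[int(v_c), int(u_c)])        # depth at the bbox center
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Pinhole back-projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy
    center = np.array([(u_c - cx) * z / fx, (v_c - cy) * z / fy, z])
    # Similar triangles: pixel extent * depth / focal length ~ metric extent
    size = np.array([(u_max - u_min) * z / fx, (v_max - v_min) * z / fy])
    return center, size

# Toy example: 640x480 image, bbox near the image center, flat 2 m depth.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
depth = np.full((480, 640), 2.0)
center, size = unproject_bbox((280, 200, 360, 280), depth, K)
print(center, size)  # 3D placement and approximate metric width/height
```

This gives both where to put the reconstructed object and how large it should be in metric units, which is why the paper can insert objects 3D-consistently without explicit 3D input from the user.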


Meet Instruct-NeRF2NeRF: An AI Method For Editing 3D Scenes With Text-Instructions - MarkTechPost

#artificialintelligence

It has never been simpler to capture a realistic digital representation of a real-world 3D scene, thanks to the development of effective neural 3D reconstruction techniques. Because capture is so user-friendly, the authors anticipate that recorded 3D content will progressively replace manually generated assets. While the pipelines for converting a real scene into a 3D representation are well established and readily available, many of the additional tools required to develop 3D assets, such as those needed for editing 3D scenes, are still in their infancy. Traditionally, modifying 3D models required specialized tools and years of skill to manually sculpt, extrude, and retexture an object. This process is significantly more complicated for neural representations, which frequently lack explicit surfaces.