GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images
As several industries are moving towards modeling massive 3D virtual worlds, the need for content creation tools that can scale in terms of the quantity, quality, and diversity of 3D content is becoming evident. In our work, we aim to train performant 3D generative models that synthesize textured meshes which can be directly consumed by 3D rendering engines, and are thus immediately usable in downstream applications. Prior works on 3D generative modeling either lack geometric details, are limited in the mesh topology they can produce, typically do not support textures, or utilize neural renderers in the synthesis process, which makes their use in common 3D software non-trivial. In this work, we introduce GET3D, a Generative model that directly generates Explicit Textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures. We bridge recent successes in differentiable surface modeling, differentiable rendering, and 2D Generative Adversarial Networks to train our model from 2D image collections. GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, motorbikes and human characters to buildings, achieving significant improvements over previous methods.
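The abstract describes a pipeline in which a latent code is decoded into explicit geometry and texture, and the textured surface is then rendered so that 2D adversarial losses can supervise training. The toy sketch below illustrates only that data flow with simple analytic stand-ins; all function names are hypothetical, and the real GET3D uses learned networks and differentiable marching-tetrahedra (DMTet) mesh extraction rather than anything shown here.

```python
import numpy as np

# Toy illustration of a GET3D-style data flow (all names hypothetical):
# latent code -> signed-distance field + texture field -> explicit surface
# with per-point color, i.e. something a renderer could consume.

def sdf(points, latent):
    # Stand-in "geometry branch": a sphere whose radius depends on the latent.
    radius = 0.5 + 0.1 * latent[0]
    return np.linalg.norm(points, axis=-1) - radius

def texture(points, latent):
    # Stand-in "texture branch": an RGB color sampled at each surface point.
    rgb = 0.5 + 0.5 * np.tanh(points * latent[1])
    return np.clip(rgb, 0.0, 1.0)

def extract_surface(latent, n=4096, tol=0.05):
    # Crude surface extraction: keep random points near the SDF zero level set.
    # (GET3D instead extracts a watertight mesh differentiably with DMTet.)
    pts = np.random.default_rng(0).uniform(-1.0, 1.0, size=(n, 3))
    near = np.abs(sdf(pts, latent)) < tol
    return pts[near]

latent = np.array([0.3, 1.2])
surface = extract_surface(latent)   # (k, 3) points on the toy surface
colors = texture(surface, latent)   # (k, 3) RGB values, one per point
print(surface.shape, colors.shape)
```

In the actual method, the extraction and rendering stages are differentiable, so gradients from 2D image-space discriminators can flow back into the geometry and texture networks; this sketch only shows the forward decomposition.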
Get3D: NVIDIA's New Generative AI Model For 3D Shapes - AI Summary
Get3D is a new generative AI model from NVIDIA that can create 3D shapes. The model was recently added to NVIDIA's marquee Omniverse platform. TheSequence is a no-BS (meaning no hype, no news, etc.) ML-oriented newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning advancements in a concise and easy-to-understand format. "NVIDIA's Get3D is a Generative AI Model for 3D Shapes" is published by Jesus Rodriguez.
How AI Is Changing Web3 Creativity in AR, VR, Virtual Humans, and Other 3D Content
Blockchain enables creators to package and monetize their digital content in new ways. However, it's not the only tech stack doing so. Artificial intelligence (AI) is also redefining creativity in the digital space, and here's how. DALL-E, Stable Diffusion, and Midjourney are all generative AI models. They use AI algorithms to automatically generate, from a simple prompt, digital content that would otherwise take a human a long time to create.
NVIDIA AI Research Helps Populate Virtual Worlds With 3D Objects
The massive virtual worlds created by growing numbers of companies and creators could be more easily populated with a diverse array of 3D buildings, vehicles, characters and more -- thanks to a new AI model from NVIDIA Research. Trained using only 2D images, NVIDIA GET3D generates 3D shapes with high-fidelity textures and complex geometric details. These 3D objects are created in the same format used by popular graphics software applications, allowing users to immediately import their shapes into 3D renderers and game engines for further editing. The generated objects could be used in 3D representations of buildings, outdoor spaces or entire cities, designed for industries including gaming, robotics, architecture and social media. GET3D can generate a virtually unlimited number of 3D shapes based on the data it's trained on.
NVIDIA's new AI model quickly generates objects and characters for virtual worlds
NVIDIA is looking to take the sting out of creating virtual 3D worlds with a new artificial intelligence model. GET3D can generate characters, buildings, vehicles and other types of 3D objects, NVIDIA says. The model should be able to whip up shapes quickly too. The company notes that GET3D can generate around 20 objects per second using a single GPU. Researchers trained the model using synthetic 2D images of 3D shapes taken from multiple angles.