Munkberg, Jacob
Generative Detail Enhancement for Physically Based Materials
Hadadan, Saeed, Bitterli, Benedikt, Zeltner, Tizian, Novák, Jan, Rousselle, Fabrice, Munkberg, Jacob, Hasselgren, Jon, Wronski, Bartlomiej, Zwicker, Matthias
We present a tool for enhancing the detail of physically based materials using an off-the-shelf diffusion model and inverse rendering. Our goal is to enhance the visual fidelity of materials with detail that is often tedious to author, by adding signs of wear, aging, weathering, etc. As these appearance details are often rooted in real-world processes, we leverage a generative image model trained on a large dataset of natural images with corresponding visuals in context. Starting with a given geometry, UV mapping, and basic appearance, we render multiple views of the object. We use these views, together with an appearance-defining text prompt, to condition a diffusion model. The details it generates are then backpropagated from the enhanced images to the material parameters via inverse differentiable rendering. For inverse rendering to be successful, the generated appearance has to be consistent across all the images. We propose two priors to improve the multi-view consistency of the diffusion model. First, we ensure that the initial noise that seeds the diffusion process is itself consistent across views by integrating it from a view-independent UV space. Second, we enforce geometric consistency by biasing the attention mechanism via a projective constraint so that pixels attend strongly to their corresponding pixel locations in other views. Our approach does not require any training or finetuning of the diffusion model, is agnostic to the material model used, and the enhanced material properties, i.e., 2D PBR textures, can be further edited by artists.
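As a concrete illustration of the first prior, below is a minimal Python/NumPy sketch (all names hypothetical) of seeding per-view diffusion noise from one shared UV-space noise texture, so that corresponding surface points in different views start from the same values. The paper integrates the noise from UV space to preserve its statistics; this sketch substitutes a plain nearest-neighbor lookup for brevity and only conveys the idea of view-consistent seeding.

    import numpy as np

    def uv_consistent_noise(uv_maps, uv_res=512, channels=4, seed=0):
        # uv_maps: list of (H, W, 2) arrays of per-pixel UV coords in [0, 1).
        # Returns one (H, W, channels) noise image per view, all gathered
        # from a single view-independent UV-space noise texture.
        rng = np.random.default_rng(seed)
        uv_noise = rng.standard_normal((uv_res, uv_res, channels))
        per_view = []
        for uv in uv_maps:
            u = np.clip((uv[..., 0] * uv_res).astype(int), 0, uv_res - 1)
            v = np.clip((uv[..., 1] * uv_res).astype(int), 0, uv_res - 1)
            per_view.append(uv_noise[v, u])  # nearest-neighbor gather
        return per_view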
Edify 3D: Scalable High-Quality 3D Asset Generation
NVIDIA: Bala, Maciej, Cui, Yin, Ding, Yifan, Ge, Yunhao, Hao, Zekun, Hasselgren, Jon, Huffman, Jacob, Jin, Jingyi, Lewis, J. P., Li, Zhaoshuo, Lin, Chen-Hsuan, Lin, Yen-Chen, Lin, Tsung-Yi, Liu, Ming-Yu, Luo, Alice, Ma, Qianli, Munkberg, Jacob, Shi, Stella, Wei, Fangyin, Xiang, Donglai, Xu, Jiashu, Zeng, Xiaohui, Zhang, Qinsheng
We introduce Edify 3D, an advanced solution designed for high-quality 3D asset generation. Our method first synthesizes RGB and surface normal images of the described object at multiple viewpoints using a diffusion model. The multi-view observations are then used to reconstruct the shape, texture, and PBR materials of the object. Our method can generate high-quality 3D assets with detailed geometry, clean shape topologies, high-resolution textures, and materials within 2 minutes of runtime.
Flexible Isosurface Extraction for Gradient-Based Mesh Optimization
Shen, Tianchang, Munkberg, Jacob, Hasselgren, Jon, Yin, Kangxue, Wang, Zian, Chen, Wenzheng, Gojcic, Zan, Fidler, Sanja, Sharp, Nicholas, Gao, Jun
This work considers gradient-based mesh optimization, where we iteratively optimize for a 3D surface mesh by representing it as the isosurface of a scalar field, an increasingly common paradigm in applications including photogrammetry, generative modeling, and inverse physics. Existing implementations adapt classic isosurface extraction algorithms like Marching Cubes or Dual Contouring; these techniques were designed to extract meshes from fixed, known fields, and in the optimization setting they lack the degrees of freedom to represent high-quality feature-preserving meshes, or suffer from numerical instabilities. We introduce FlexiCubes, an isosurface representation specifically designed for optimizing an unknown mesh with respect to geometric, visual, or even physical objectives. Our main insight is to introduce additional carefully-chosen parameters into the representation, which allow local flexible adjustments to the extracted mesh geometry and connectivity. These parameters are updated along with the underlying scalar field via automatic differentiation when optimizing for a downstream task. We base our extraction scheme on Dual Marching Cubes for improved topological properties, and present extensions to optionally generate tetrahedral and hierarchically adaptive meshes. Extensive experiments validate FlexiCubes on both synthetic benchmarks and real-world applications, showing that it offers significant improvements in mesh quality and geometric fidelity.
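To make the optimization setting concrete, here is a minimal PyTorch sketch of the loop the abstract describes: a grid of scalar field values updated via automatic differentiation against a downstream objective. This is not FlexiCubes itself; it regresses a soft occupancy instead of extracting a mesh (all hyperparameters, e.g. the sharpness factor 20, are hypothetical), but it shows the gradient-based loop into which a differentiable extraction scheme like FlexiCubes slots.

    import torch

    res = 32
    axis = torch.linspace(-1.0, 1.0, res)
    grid = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1)

    # Learnable scalar field, one value per grid vertex.
    sdf = torch.nn.Parameter(torch.randn(res, res, res) * 0.1)

    # Downstream target: occupancy of a sphere of radius 0.5.
    target_occ = (grid.norm(dim=-1) < 0.5).float()

    opt = torch.optim.Adam([sdf], lr=1e-2)
    for step in range(500):
        occ = torch.sigmoid(-20.0 * sdf)  # soft occupancy from the signed field
        loss = torch.nn.functional.mse_loss(occ, target_occ)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # The isosurface sdf == 0 can then be extracted, e.g. with Marching Cubes.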
Noise2Noise: Learning Image Restoration without Clean Data
Lehtinen, Jaakko, Munkberg, Jacob, Hasselgren, Jon, Laine, Samuli, Karras, Tero, Aittala, Miika, Aila, Timo
We apply basic statistical reasoning to signal reconstruction by machine learning -- learning to map corrupted observations to clean signals -- with a simple and powerful conclusion: under certain common circumstances, it is possible to learn to restore signals without ever observing clean ones, at performance close to or equal to that of training with clean exemplars. We show applications in photographic noise removal, denoising of synthetic Monte Carlo images, and reconstruction of MRI scans from undersampled inputs, all based only on observing corrupted data.
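The training scheme described here amounts to a small change in the regression target: both the input and the target are independently corrupted observations of the same underlying signal. A minimal PyTorch sketch (toy network and synthetic Gaussian noise, purely illustrative):

    import torch
    import torch.nn as nn

    # Noise2Noise training: no clean image is ever shown to the network.
    denoiser = nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )
    opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

    clean = torch.rand(16, 1, 64, 64)  # stand-in dataset (used only to simulate corruption)
    for step in range(200):
        # Two independent noisy observations of the same underlying signal.
        noisy_in = clean + 0.1 * torch.randn_like(clean)
        noisy_tgt = clean + 0.1 * torch.randn_like(clean)
        loss = nn.functional.mse_loss(denoiser(noisy_in), noisy_tgt)
        opt.zero_grad()
        loss.backward()
        opt.step()

With zero-mean noise, the expected L2 loss is minimized by the same mapping as when regressing against clean targets, which is the statistical observation the paper builds on.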