Ozguroglu, Ege
Generative Camera Dolly: Extreme Monocular Dynamic Novel View Synthesis
Van Hoorick, Basile, Wu, Rundi, Ozguroglu, Ege, Sargent, Kyle, Liu, Ruoshi, Tokmakov, Pavel, Dave, Achal, Zheng, Changxi, Vondrick, Carl
Accurate reconstruction of complex dynamic scenes from just a single viewpoint continues to be a challenging task in computer vision. Current dynamic novel view synthesis methods typically require videos from many different camera viewpoints, necessitating careful recording setups and significantly restricting their utility in the wild as well as for embodied AI applications. In this paper, we propose $\textbf{GCD}$, a controllable monocular dynamic view synthesis pipeline that leverages large-scale diffusion priors: given a video of any scene, it generates a synchronous video from any other chosen perspective, conditioned on a set of relative camera pose parameters. Our model does not require depth as input and does not explicitly model 3D scene geometry, instead performing end-to-end video-to-video translation in order to achieve its goal efficiently. Despite being trained only on synthetic multi-view video data, zero-shot real-world generalization experiments show promising results in multiple domains, including robotics, object permanence, and driving environments. We believe our framework can potentially unlock powerful applications in rich dynamic scene understanding, perception for robotics, and interactive 3D video viewing experiences for virtual reality.
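The interface the abstract describes can be sketched schematically: a monocular video plus a relative camera pose goes in, a synchronous video from the new viewpoint comes out, with no depth map or explicit 3D geometry involved. Everything below is a hypothetical illustration, not the paper's actual API; the `denoise_fn` placeholder stands in for the pretrained video diffusion sampler, and the 6-vector pose parameterization is an assumption.

```python
import numpy as np

def synthesize_novel_view(source_video: np.ndarray,
                          relative_pose: np.ndarray,
                          denoise_fn=None) -> np.ndarray:
    """Schematic video-to-video interface: given a monocular clip of
    shape (T, H, W, 3) and a relative camera pose vector, return a
    synchronous clip of the same shape seen from the new viewpoint.

    `denoise_fn` stands in for the conditioned video diffusion model;
    an identity placeholder is used here so the sketch runs.
    """
    assert source_video.ndim == 4 and source_video.shape[-1] == 3
    assert relative_pose.shape == (6,)
    if denoise_fn is None:
        denoise_fn = lambda video, pose: video  # placeholder sampler
    # End-to-end translation: no depth input, no explicit 3D scene
    # model; the pose vector directly conditions the generator.
    return denoise_fn(source_video, relative_pose)

video = np.zeros((8, 64, 64, 3), dtype=np.float32)  # 8-frame input clip
pose = np.array([30.0, 10.0, 0.0, 0.0, 0.0, 1.5])   # hypothetical 6-DoF offset
out = synthesize_novel_view(video, pose)
```

The key property the sketch encodes is that the output is frame-synchronous with the input: same number of frames, same resolution, only the viewpoint changes.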
Dreamitate: Real-World Visuomotor Policy Learning via Video Generation
Liang, Junbang, Liu, Ruoshi, Ozguroglu, Ege, Sudhakar, Sruthi, Dave, Achal, Tokmakov, Pavel, Song, Shuran, Vondrick, Carl
A key challenge in manipulation is learning a policy that can robustly generalize to diverse visual environments. A promising mechanism for learning robust policies is to leverage video generative models, which are pretrained on large-scale datasets of internet videos. In this paper, we propose a visuomotor policy learning framework that fine-tunes a video diffusion model on human demonstrations of a given task. At test time, we generate an example of an execution of the task conditioned on images of a novel scene, and use this synthesized execution directly to control the robot. Our key insight is that using common tools allows us to effortlessly bridge the embodiment gap between the human hand and the robot manipulator. We evaluate our approach on four tasks of increasing complexity and demonstrate that harnessing internet-scale generative models allows the learned policy to achieve a significantly higher degree of generalization than existing behavior cloning approaches.
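The test-time loop in this abstract — synthesize an execution video for the novel scene, then use that synthesized execution directly to control the robot — can be sketched as below. Both `generate_fn` and `track_tool_fn` are hypothetical stand-ins (for the fine-tuned video diffusion model and a tool-pose tracker, respectively); the toy implementations exist only so the sketch runs.

```python
import numpy as np

def plan_from_generated_video(scene_image, generate_fn, track_tool_fn):
    """Schematic test-time loop: imagine an execution of the task in
    the novel scene as a video, then recover the tool's per-frame
    poses and use them directly as the robot's trajectory."""
    video = generate_fn(scene_image)                 # (T, H, W, 3) imagined rollout
    trajectory = [track_tool_fn(f) for f in video]   # one tool pose per frame
    return np.stack(trajectory)                      # (T, 6) pose commands

# Toy stand-ins so the sketch runs end to end:
fake_generate = lambda img: np.zeros((5, 32, 32, 3))       # 5-frame "execution"
fake_track = lambda frame: np.arange(6, dtype=float)       # constant 6-DoF pose
traj = plan_from_generated_video(np.zeros((32, 32, 3)),
                                 fake_generate, fake_track)
```

Tracking a common tool rather than the demonstrator's hand is what makes the recovered trajectory transferable across the human/robot embodiment gap.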
pix2gestalt: Amodal Segmentation by Synthesizing Wholes
Ozguroglu, Ege, Liu, Ruoshi, Surís, Dídac, Chen, Dian, Dave, Achal, Tokmakov, Pavel, Vondrick, Carl
Our approach capitalizes on large-scale denoising diffusion models, which are excellent representations of the natural image manifold, and transfers their representations to this task. Due to their large-scale training data, we hypothesize that such pretrained models have implicitly learned amodal representations, which we can reconfigure to encode object grouping and perform amodal completion. As training data, we use a synthetically curated dataset containing occluded objects paired with their whole counterparts; from it, we learn a conditional diffusion model that, given an RGB image and a point prompt, generates whole objects in challenging zero-shot cases, including examples that break natural and physical priors, such as art. Experiments show that our approach outperforms supervised baselines on established benchmarks. Our model can furthermore be used to significantly improve the performance of existing object recognition and 3D reconstruction methods in the presence of occlusions.
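The conditioning interface the abstract describes — an RGB image plus a point prompt on a partially occluded object, producing an image of the whole object — can be sketched as follows. This is a hypothetical illustration only; `sample_fn` is a placeholder for the conditional diffusion sampler, and all names here are assumptions rather than the paper's API.

```python
import numpy as np

def amodal_complete(image: np.ndarray, point: tuple, sample_fn=None):
    """Schematic interface for point-prompted amodal completion: given
    an RGB image (H, W, 3) and an (x, y) point on a possibly occluded
    object, return an image of the whole, de-occluded object.

    `sample_fn` stands in for the conditional diffusion sampler; an
    identity placeholder is used so the sketch runs.
    """
    h, w, _ = image.shape
    x, y = point
    assert 0 <= x < w and 0 <= y < h, "point prompt must lie inside the image"
    if sample_fn is None:
        sample_fn = lambda img, pt: img.copy()  # placeholder sampler
    return sample_fn(image, point)

img = np.zeros((16, 16, 3), dtype=np.float32)
whole = amodal_complete(img, (4, 4))  # prompt a point on the occluded object
```

Because the output is an ordinary image of the completed object, it can be fed directly into downstream recognition or 3D reconstruction models, which is how the abstract's occlusion-robustness gains arise.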