mahmoudnafifi/WB_sRGB

#artificialintelligence

Reference code for the paper When Color Constancy Goes Wrong: Correcting Improperly White-Balanced Images. The original source code for the paper was written in Matlab; we also provide a Python version. We tried to make both versions identical, but there is no guarantee that the Python version will produce exactly the same results.
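
A minimal usage sketch of the Python version follows. The module layout, the WBsRGB class, its constructor options, and the correctImage method are assumptions based on the repository's demo script, so verify them against the README before use.

```python
# Hedged usage sketch for the Python version of WB_sRGB; the class name,
# constructor arguments, and the [0, 1] output range are assumptions taken
# from the repository's demo script, not a guaranteed interface.
import cv2
from classes import WBsRGB as wb_srgb

# gamut_mapping=1 is assumed to clip out-of-gamut colors and gamut_mapping=2
# to scale them; upgraded=1 is assumed to select the newer trained model.
wb_model = wb_srgb.WBsRGB(gamut_mapping=2, upgraded=0)

img = cv2.imread('improperly_white_balanced.jpg')  # hypothetical input file
corrected = wb_model.correctImage(img)             # corrected sRGB image
cv2.imwrite('corrected.jpg', corrected * 255)      # assumed [0, 1] output
```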


Beyond Photo Realism for Domain Adaptation from Synthetic Data

arXiv.org Machine Learning

As synthetic imagery is used more frequently in training deep models, it is important to understand how different synthesis techniques impact the performance of such models. In this work, we perform a thorough evaluation of several synthesis techniques and their impact on the complexity of classifier domain adaptation to the "real" underlying data distribution that they seek to replicate. In addition, we propose a novel learned synthesis technique that trains classifier models better than state-of-the-art offline graphical methods while using significantly fewer computational resources. We accomplish this by learning a generative model to perform shading of synthetic geometry, conditioned on a "g-buffer" representation of the scene to render as well as a low-sample Monte Carlo rendered image. The major contributions are (i) a dataset that allows comparison of real and synthetic versions of the same scene, (ii) an augmented data representation that boosts the stability of learning and improves the dataset's accuracy, (iii) three different partially differentiable rendering techniques in which lighting, denoising, and shading are learned, and (iv) an improvement to a state-of-the-art generative adversarial network (GAN) approach that uses an ensemble of trained models to generate datasets approaching the performance of training on real data and surpassing the performance of full global illumination rendering.
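
To make the conditioning scheme concrete, here is a minimal PyTorch sketch, not the paper's code: a generator that predicts a shaded image from a g-buffer (here assumed to contain albedo, normals, and depth) concatenated with a low-sample Monte Carlo render. All channel counts and layer sizes are illustrative assumptions.

```python
# Illustrative sketch of g-buffer-conditioned shading; the architecture
# details are assumptions, not the paper's network.
import torch
import torch.nn as nn

class GBufferShader(nn.Module):
    def __init__(self, gbuffer_channels=7, mc_channels=3):
        super().__init__()
        in_ch = gbuffer_channels + mc_channels  # condition on both inputs
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, gbuffer, mc_render):
        # Concatenate scene attributes with the noisy low-sample render and
        # let the network predict the final shaded image.
        return self.net(torch.cat([gbuffer, mc_render], dim=1))

# Example: albedo (3) + normals (3) + depth (1) g-buffer, plus an RGB render.
gbuffer = torch.randn(1, 7, 128, 128)
mc = torch.randn(1, 3, 128, 128)
shaded = GBufferShader()(gbuffer, mc)  # -> (1, 3, 128, 128)
```

In the GAN setting the abstract describes, such a generator would additionally be trained against a discriminator; the sketch shows only the conditioning path.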


Google is working to make its Pixel camera less racist

Mashable

Since the creation of the camera, photography has been technologically optimized to capture white people best. Engineers at Google are trying to change that. On Tuesday at Google I/O, the company's developer conference, Google announced that it is reworking the algorithms and tweaking the training data that power the Pixel camera in order to more accurately and brilliantly capture people of color. Specifically, it is working to better light people with darker skin and to represent skin tone more accurately. Silhouettes of people with wavy or curly hair will also stand out more sharply from the background.


Equivariant Neural Rendering

arXiv.org Machine Learning

We propose a framework for learning neural scene representations directly from images, without 3D supervision. Our key insight is that 3D structure can be imposed by ensuring that the learned representation transforms like a real 3D scene. Specifically, we introduce a loss which enforces equivariance of the scene representation with respect to 3D transformations. Our formulation allows us to infer and render scenes in real time while achieving comparable results to models requiring minutes for inference. In addition, we introduce two challenging new datasets for scene representation and neural rendering, including scenes with complex lighting and backgrounds. Through experiments, we show that our model achieves compelling results on these datasets as well as on standard ShapeNet benchmarks.
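
A minimal sketch of the equivariance loss described above, assuming hypothetical encoder, renderer, and representation-transform callables (none of these names come from the paper): the representation inferred from one view, transformed by the known relative 3D motion, should render to the other view.

```python
# Hedged sketch of an equivariance loss; every function here is a
# hypothetical placeholder, not the authors' interface.
import torch

def equivariance_loss(encoder, renderer, transform_repr, img_a, img_b, T_ab):
    """img_b shows the same scene as img_a after the 3D transformation T_ab."""
    z_a = encoder(img_a)                 # infer scene representation of view A
    z_b_hat = transform_repr(z_a, T_ab)  # apply T_ab in representation space
    recon_b = renderer(z_b_hat)          # render the transformed scene
    return torch.mean((recon_b - img_b) ** 2)  # should reproduce view B
```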


InteriorNet: Mega-scale Multi-sensor Photo-realistic Indoor Scenes Dataset

arXiv.org Artificial Intelligence

Datasets have gained enormous popularity in the computer vision community, from training and evaluating Deep Learning-based methods to benchmarking Simultaneous Localization and Mapping (SLAM). Synthetic imagery holds vast potential because it scales: large amounts of data can be obtained without tedious manual ground-truth annotation or measurement. Here, we present a dataset that aims to provide a higher degree of photo-realism, larger scale, and more variability than existing datasets, while serving a wider range of purposes. Our dataset leverages the availability of millions of professional interior designs and millions of production-level furniture and object assets, all with fine geometric detail and high-resolution textures. We render high-resolution, high-frame-rate video sequences following realistic trajectories, supporting various camera types and providing inertial measurements. Together with the release of the dataset, we will make executables of our interactive simulator and renderer available at https://interiornetdataset.github.io. To showcase the usability and uniqueness of our dataset, we report benchmarking results for both sparse and dense SLAM algorithms.