



PUGS: Perceptual Uncertainty for Grasp Selection in Underwater Environments

Bagoren, Onur, Micatka, Marc, Skinner, Katherine A., Marburg, Aaron

arXiv.org Artificial Intelligence

When navigating and interacting in challenging environments where sensory information is imperfect and incomplete, robots must make decisions that account for these shortcomings. We propose a novel method for quantifying and representing such perceptual uncertainty in 3D reconstruction through occupancy uncertainty estimation. We develop a framework to incorporate it into grasp selection for autonomous manipulation in underwater environments. Instead of treating each measurement equally when deciding which location to grasp from, we present a framework that propagates uncertainty inherent in the multi-view reconstruction process into the grasp selection. We evaluate our method with both simulated and real-world data, showing that by accounting for uncertainty, grasp selection becomes robust against partial and noisy measurements. Code will be made available at https://onurbagoren.github.io/PUGS/
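The core idea of the abstract, that measurements should not be weighted equally when scoring grasp candidates, can be sketched in a toy form. This is an illustration of the general principle only, not the PUGS method; the scoring function, field names, and numbers are all invented:

```python
# Illustrative only: down-weight grasp candidates whose supporting
# reconstruction is uncertain, rather than scoring on geometry alone.

def grasp_score(quality, occupancy_variance):
    """Higher geometric quality is better; high reconstruction
    uncertainty (variance) discounts the score."""
    return quality / (1.0 + occupancy_variance)

# Hypothetical candidates: A looks geometrically better but sits on a
# poorly observed region; B is slightly worse but well observed.
candidates = [
    {"id": "A", "quality": 0.9, "occupancy_variance": 2.0},
    {"id": "B", "quality": 0.7, "occupancy_variance": 0.1},
]
best = max(candidates,
           key=lambda c: grasp_score(c["quality"], c["occupancy_variance"]))
print(best["id"])  # → B
```

Under equal weighting, candidate A would win; once uncertainty is propagated into the score, the reliably observed candidate B is preferred.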


PUG: Photorealistic and Semantically Controllable Synthetic Data for Representation Learning

Neural Information Processing Systems

Synthetic image datasets offer unmatched advantages for designing and evaluating deep neural networks: they make it possible to (i) render as many data samples as needed, (ii) precisely control each scene and yield granular ground truth labels (and captions), (iii) precisely control distribution shifts between training and testing to isolate variables of interest for sound experimentation. Despite such promise, the use of synthetic image data is still limited -- and often played down -- mainly due to their lack of realism. We use the Unreal Engine, a powerful game engine well known in the entertainment industry, to produce PUG (Photorealistic Unreal Graphics) environments and datasets for representation learning. Using PUG for evaluation and fine-tuning, we demonstrate the potential of PUG to both enable more rigorous evaluations and to improve model training.


PUG: Photorealistic and Semantically Controllable Synthetic Data for Representation Learning

Bordes, Florian, Shekhar, Shashank, Ibrahim, Mark, Bouchacourt, Diane, Vincent, Pascal, Morcos, Ari S.

arXiv.org Artificial Intelligence

Synthetic image datasets offer unmatched advantages for designing and evaluating deep neural networks: they make it possible to (i) render as many data samples as needed, (ii) precisely control each scene and yield granular ground truth labels (and captions), (iii) precisely control distribution shifts between training and testing to isolate variables of interest for sound experimentation. Despite such promise, the use of synthetic image data is still limited -- and often played down -- mainly due to their lack of realism. Most works therefore rely on datasets of real images, which have often been scraped from public images on the internet, and may have issues with regards to privacy, bias, and copyright, while offering little control over how objects precisely appear. In this work, we present a path to democratize the use of photorealistic synthetic data: we develop a new generation of interactive environments for representation learning research, that offer both controllability and realism. We use the Unreal Engine, a powerful game engine well known in the entertainment industry, to produce PUG (Photorealistic Unreal Graphics) environments and datasets for representation learning. In this paper, we demonstrate the potential of PUG to enable more rigorous evaluations of vision models.


The Story Behind The Mitchells vs. the Machines' Killer Furbies

Slate

The arrival of The Mitchells vs. the Machines on Netflix feels like the detonation of a confetti bomb--it's a colorful, inventive, and all-around delightful movie. In fact, as my colleague Sam Adams wrote for Slate, it's the first great animated movie of 2021. Directed by Mike Rianda and co-directed by Jeff Rowe, the movie stars Abbi Jacobson as Katie, a girl about to head to college, and Danny McBride, Maya Rudolph, and Rianda respectively as her father, mother, and younger brother Aaron, all of whom join her on a road trip in an attempt at a last hurrah before she flies the coop. That trip hits a bit of a road bump, however, when a robot uprising threatens the entire human race. One of the biggest--and funniest--set pieces of the film involves the Mitchell family having to fight a horde of Furby dolls.


Measuring Machine Learning's Potential

#artificialintelligence

When it comes to AI in drug discovery, we don't yet know the limits of its abilities, though we are making progress. What can it do, what can't it do, and what factors will determine the answers to those questions? With the help of Bryn Roberts, Global Head of Operations for Roche Pharmaceutical Research & Early Development in Basel, Switzerland, here we discuss some of the factors that will help us set the upper and lower limits of AI's capabilities. In order for a machine to learn, it must be fed categorized data. For example, if you wanted to train a machine to recognize human faces in pictures, you would need to upload pictures and point out the faces in each one in a way the computer can understand.
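The labeled-data idea described above can be sketched with a toy classifier. This is a minimal nearest-centroid illustration of supervised learning in general, not anything from the article; the two-number "feature vectors" standing in for pictures, and their labels, are invented:

```python
# Toy supervised learning: the machine is "fed categorized data"
# (feature vectors paired with labels) and learns to classify new samples.

def train(samples):
    """samples: list of (features, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest in squared distance."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# "Pictures with the faces pointed out": label 1 = face, 0 = no face.
labeled = [([0.9, 0.8], 1), ([0.85, 0.9], 1),
           ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
model = train(labeled)
print(predict(model, [0.8, 0.85]))  # → 1
```

The point of the sketch is the shape of the data, not the algorithm: every training sample must already carry its category, which is exactly the labeling burden the passage describes.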