PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models

#artificialintelligence

The primary aim of single-image super-resolution is to construct a high-resolution (HR) image from a corresponding low-resolution (LR) input. In previous approaches, which have generally been supervised, the training objective typically measures a pixel-wise average distance between the super-resolved (SR) and HR images. Optimizing such metrics often leads to blurring, especially in high variance (detailed) regions. We propose an alternative formulation of the super-resolution problem based on creating realistic SR images that downscale correctly. We present a novel super-resolution algorithm addressing this problem, PULSE (Photo Upsampling via Latent Space Exploration), which generates high-resolution, realistic images at resolutions previously unseen in the literature.
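Below is a minimal sketch of the downscaling-consistency idea described in this abstract: search a pretrained generator's latent space for a realistic high-resolution image whose downscaled version matches the low-resolution input. The names `generator`, `latent_dim`, and the bicubic downscaler are assumptions for illustration; PULSE itself additionally constrains the latent search to stay near the generator's natural image manifold.

```python
import torch
import torch.nn.functional as F

def latent_space_upsample(generator, lr_image, scale_factor, steps=500, step_size=0.1):
    """Sketch of the "downscales correctly" objective: optimize a latent code so
    that the generated HR image, once downscaled, matches the LR input.
    Not the authors' exact method; generator interface is a placeholder."""
    # Start from a random latent vector and optimize it directly.
    z = torch.randn(1, generator.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=step_size)
    for _ in range(steps):
        sr = generator(z)  # candidate high-resolution image
        downscaled = F.interpolate(sr, scale_factor=1.0 / scale_factor,
                                   mode='bicubic', align_corners=False)
        loss = F.mse_loss(downscaled, lr_image)  # downscaling-consistency term
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()
```

Because the candidate image is always produced by the generator, the result stays photorealistic by construction; the optimization only has to make it consistent with the LR observation.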


Boosting Supervision with Self-Supervision for Few-shot Learning

arXiv.org Machine Learning

We present a technique to improve the transferability of deep representations learned on small labeled datasets by introducing self-supervised tasks as auxiliary loss functions. While recent approaches for self-supervised learning have shown the benefits of training on large unlabeled datasets, we find improvements in generalization even on small datasets and when combined with strong supervision. Learning representations with self-supervised losses reduces the relative error rate of a state-of-the-art meta-learner by 5-25% on several few-shot learning benchmarks, and that of off-the-shelf deep networks on standard classification tasks trained from scratch. We find that the benefits of self-supervision increase with the difficulty of the task. Our approach uses the images within the dataset itself to construct the self-supervised losses, and is therefore an effective way of learning transferable representations without relying on any external training data.
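As a rough illustration of the auxiliary-loss idea, the sketch below combines a supervised cross-entropy term with a self-supervised rotation-prediction term built from the same images. The network, head sizes, and loss weight `alpha` are placeholder choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared backbone with two heads: one for class labels and one for a
    self-supervised rotation-prediction task (a sketch of the auxiliary-loss
    idea; the backbone and head sizes are placeholders)."""
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone
        self.cls_head = nn.Linear(feat_dim, num_classes)
        self.rot_head = nn.Linear(feat_dim, 4)  # 0/90/180/270 degree rotations

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.rot_head(feats)

def combined_loss(model, images, labels, alpha=1.0):
    """Supervised cross-entropy plus a self-supervised rotation loss built
    from the same batch of images, so no external data is required."""
    ce = nn.CrossEntropyLoss()
    # Supervised term.
    cls_logits, _ = model(images)
    sup_loss = ce(cls_logits, labels)
    # Self-supervised term: rotate each image by a random multiple of 90 degrees
    # and ask the rotation head to recover that multiple.
    rot_labels = torch.randint(0, 4, (images.size(0),), device=images.device)
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, rot_labels)])
    _, rot_logits = model(rotated)
    ssl_loss = ce(rot_logits, rot_labels)
    return sup_loss + alpha * ssl_loss
```

The self-supervised labels are free to generate from the images themselves, which is what lets the auxiliary loss help even on small labeled datasets.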



Getting AI to Learn Like a Baby is Goal of Self-Supervised Learning - AI Trends

#artificialintelligence

"Even with a lot of supervised data, AIs can't make the same kinds of generalizations that human children can," Gopnik said. "Their knowledge is much narrower and more limited, and they are easily fooled. Current AIs are like children with super-helicopter-tiger moms--programs that hover over the learner dictating whether it is right or wrong at every step. The helicoptered AI children can be very good at learning to do specific things well, but they fall apart when it comes to resilience and creativity. A small change in the learning problem means that they have to start all over again."


r/MachineLearning - [D] Paper Explained - Planning to Explore via Self-Supervised World Models

#artificialintelligence

What can an agent do without any reward? While many formulations of intrinsic reward exist (curiosity, novelty, etc.), they all learn retrospectively, from states the agent has already visited. Plan2Explore is the first agent that plans ahead in a learned latent world model, imagining future trajectories in order to seek out states where the model is uncertain about what will happen.
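One way to make that uncertainty concrete, and roughly what Plan2Explore does, is to measure the disagreement of an ensemble of one-step latent predictors. The sketch below illustrates that intrinsic-reward signal; the class and method names are placeholders, not the published code.

```python
import torch
import torch.nn as nn

class LatentEnsemble(nn.Module):
    """Ensemble of one-step predictors over latent states. The variance of
    their predictions serves as an intrinsic reward: high variance means the
    world model is uncertain about what will happen next. A sketch of the
    disagreement idea, not the authors' implementation."""
    def __init__(self, latent_dim, action_dim, hidden=256, members=5):
        super().__init__()
        self.members = nn.ModuleList([
            nn.Sequential(nn.Linear(latent_dim + action_dim, hidden),
                          nn.ReLU(),
                          nn.Linear(hidden, latent_dim))
            for _ in range(members)
        ])

    def intrinsic_reward(self, latent, action):
        # Each member predicts the next latent state; the reward is the
        # mean variance of those predictions across ensemble members.
        inp = torch.cat([latent, action], dim=-1)
        preds = torch.stack([m(inp) for m in self.members])  # (members, batch, latent_dim)
        return preds.var(dim=0).mean(dim=-1)                 # (batch,)
```

During exploration, the agent imagines latent rollouts with its world model and plans actions that maximize this disagreement, so it is drawn toward states it cannot yet predict rather than revisiting ones it already understands.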