Validation and Generalizability of Self-Supervised Image Reconstruction Methods for Undersampled MRI

#artificialintelligence

Purpose: To investigate aspects of the validation of self-supervised algorithms for reconstruction of undersampled MR images: quantitative evaluation of prospective reconstructions, potential differences between prospective and retrospective reconstructions, suitability of commonly used quantitative metrics, and generalizability. Theory and Methods: Two self-supervised algorithms, based on self-supervised denoising and on neural network image priors, were investigated. They were compared to a least-squares fit and a compressed-sensing reconstruction using in-vivo and phantom data. Their generalizability was tested with prospectively undersampled data acquired under experimental conditions different from those used for training. Results: Prospective reconstructions can exhibit significant distortion relative to retrospective reconstructions and the ground truth.
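The difference between the least-squares and compressed-sensing baselines is easy to make concrete. The sketch below is illustrative and not the paper's implementation: it reconstructs a toy phantom from retrospectively undersampled k-space two ways, a zero-filled least-squares solution and an ISTA-based compressed-sensing solve. The phantom, the ~30% line sampling, and the sparsity-in-image-domain prior are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "phantom": a piecewise-constant image with a few bright regions.
N = 64
x_true = np.zeros((N, N))
x_true[16:24, 16:24] = 1.0
x_true[40:52, 30:38] = 0.5

# Retrospective undersampling: keep a random ~30% of k-space lines.
mask = np.zeros((N, N), dtype=bool)
mask[rng.random(N) < 0.3, :] = True
y = mask * np.fft.fft2(x_true, norm="ortho")   # measured (masked) k-space

# Least-squares (zero-filled) reconstruction: F^H M^H y.
x_ls = np.real(np.fft.ifft2(y, norm="ortho"))

# Compressed sensing via ISTA: min_x 0.5*||M F x - y||^2 + lam*||x||_1,
# assuming (for illustration only) that the image itself is sparse.
def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros((N, N))
lam, step = 0.05, 1.0            # step 1 is safe: the orthonormal FFT gives ||M F|| <= 1
for _ in range(100):
    residual_k = mask * (np.fft.fft2(x, norm="ortho") - y)
    grad = np.real(np.fft.ifft2(residual_k, norm="ortho"))
    x = soft_threshold(x - step * grad, step * lam)

print("zero-filled error:", np.linalg.norm(x_ls - x_true))
print("CS (ISTA) error:  ", np.linalg.norm(x - x_true))
```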


The First High-Performance Self-Supervised Algorithm That Works For Speech, Vision, And Text - Liwaiwai

#artificialintelligence

But while people appear to learn in a similar way regardless of how they get information -- whether they use sight or sound, for example -- there are currently big differences in the way self-supervised learning algorithms learn from images, speech, text, and other modalities. This discrepancy has been a significant barrier to applying advances in self-supervised learning more broadly. Because a powerful algorithm designed for, say, understanding images can't be directly applied to another modality, such as text, it is difficult to push several modalities ahead at the same rate. This is why Meta AI developed and is excited to announce data2vec, the first high-performance self-supervised algorithm that works for multiple modalities. Applied separately to speech, images, and text, data2vec outperformed the previous best single-purpose algorithms for computer vision and speech, and it is competitive on NLP tasks.
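The announcement does not spell out the mechanism, but data2vec is known to train a student network to predict latent representations produced by an exponential-moving-average teacher that sees the unmasked input. Below is a hedged sketch of one such training step, not Meta's code: the tiny transformer, the mask rate, the K=3 target layers, and the EMA decay are illustrative choices, and details such as target normalization are omitted.

```python
import copy
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """A small transformer whose per-layer outputs we can collect."""
    def __init__(self, dim=64, depth=4, heads=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
            for _ in range(depth)
        )

    def forward(self, x):
        hidden = []
        for layer in self.layers:
            x = layer(x)
            hidden.append(x)
        return x, hidden

student = TinyEncoder()
teacher = copy.deepcopy(student).requires_grad_(False)
teacher.eval()                                  # no dropout in the teacher
mask_emb = nn.Parameter(torch.zeros(64))        # learned embedding for masked positions
opt = torch.optim.AdamW(list(student.parameters()) + [mask_emb], lr=1e-4)

tokens = torch.randn(8, 16, 64)                 # (batch, seq, dim): embedded input of any modality
mask = torch.rand(8, 16) < 0.5                  # random mask; the real method masks per modality

# Teacher sees the FULL input; targets are the mean of its top-K layer
# outputs (data2vec also normalizes these targets, omitted here for brevity).
with torch.no_grad():
    _, t_hidden = teacher(tokens)
    target = torch.stack(t_hidden[-3:]).mean(0)

# Student sees the MASKED input and regresses the latent targets.
inp = torch.where(mask.unsqueeze(-1), mask_emb.expand_as(tokens), tokens)
pred, _ = student(inp)
loss = nn.functional.mse_loss(pred[mask], target[mask])

opt.zero_grad()
loss.backward()
opt.step()

# Exponential-moving-average update of the teacher.
with torch.no_grad():
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(0.999).add_(p_s, alpha=0.001)
```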


Improve object discovery with self-supervised transformers using TokenCut

#artificialintelligence

Paper: Self-Supervised Transformers for Unsupervised Object Discovery using Normalized Cut. arXiv abstract: https://arxiv.org/abs/2202.11539 arXiv PDF: https://arxiv.org/pdf/2202.11539.pdf

Transformers trained with self-supervised learning using a self-distillation loss (DINO) have been shown to produce attention maps that highlight salient foreground objects. ... In this paper, ... demonstrate a graph-based approach that ...
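A minimal sketch of the normalized-cut step at the heart of this approach is below. It is not the authors' code: random features stand in for real DINO patch features, and the similarity threshold and foreground heuristic are simplified assumptions. The bipartition comes from the second-smallest eigenvector of the normalized graph Laplacian built over patch-to-patch similarities.

```python
import numpy as np

rng = np.random.default_rng(0)
feats = rng.standard_normal((196, 384))        # stand-in for 14x14 DINO patch features

# Affinity graph from cosine similarity, lightly thresholded.
f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
W = f @ f.T
tau = 0.2                                      # similarity threshold (hyperparameter)
W = np.where(W > tau, 1.0, 1e-5)               # binarized affinities; epsilon keeps the graph connected

# Normalized-cut relaxation: second-smallest eigenvector of the
# symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
d = W.sum(1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
eigvals, eigvecs = np.linalg.eigh(L)           # ascending eigenvalues
fiedler = D_inv_sqrt @ eigvecs[:, 1]           # recover the random-walk eigenvector

# Bipartition at the mean; take the smaller side as the salient object
# (the paper uses additional heuristics to pick the foreground side).
partition = fiedler > fiedler.mean()
foreground = partition if partition.sum() < (~partition).sum() else ~partition
print("foreground patches:", foreground.sum(), "of", len(foreground))
```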


Deepmind Researchers Propose 'ReLICv2': Pushing The Limits of Self-Supervised ResNets

#artificialintelligence

Supervised learning architectures generally require a massive amount of labeled data, and acquiring high-quality labels at that scale can be very costly and time-consuming. The main idea behind self-supervised methods in deep learning is to learn patterns from a given set of unlabeled data and then fine-tune the model with a small amount of labeled data. Self-supervised learning using residual networks has progressed recently, but such models still underperform supervised residual networks by a large margin on ImageNet classification benchmarks. This gap has so far limited the use of self-supervised models in performance-critical scenarios.
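ReLICv2's exact objective adds an explicit invariance regularizer on top of a contrastive loss, but the contrastive core that such self-supervised ResNet methods share can be sketched generically. The snippet below is a plain InfoNCE loss over two augmented views, not ReLICv2 itself; the tiny encoder and the random "views" are stand-ins.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two augmented views of the same images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature           # (batch, batch) similarity matrix
    labels = torch.arange(len(z1))             # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Pretrain on unlabeled data with the contrastive loss, then fine-tune the
# encoder with a small labeled set (fine-tuning step not shown).
encoder = torch.nn.Sequential(
    torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 16)
)
x1, x2 = torch.randn(256, 32), torch.randn(256, 32)   # stand-ins for two augmented views
loss = info_nce(encoder(x1), encoder(x2))
loss.backward()
```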


@Scale 2019: Unique challenges and opportunities for self-supervised learning in autonomous driving

#artificialintelligence

Autonomous vehicles generate a lot of raw (unlabeled) data every minute, but only a small fraction of that data can be labeled manually. Ashesh focuses on how unlabeled data can be leveraged for perception and prediction tasks in a self-supervised manner. He touches on a few approaches unique to the AV domain, including cross-modal self-supervised learning, in which one modality serves as a learning signal for another without the need for labeling. Another approach he covers uses the outputs of large-scale offline optimization as a learning signal: neural networks are trained to mimic those outputs while running in real time on the vehicle.
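Cross-modal self-supervision of the kind described here can be made concrete with a small sketch: sparse lidar depth acts as a free training signal for a camera depth network, with no human labels involved. The toy network, tensor shapes, and 5% lidar coverage below are illustrative assumptions, not the talk's actual system.

```python
import torch
import torch.nn as nn

depth_net = nn.Sequential(                     # toy per-pixel depth regressor
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Softplus(),   # Softplus keeps depth positive
)

image = torch.randn(4, 3, 64, 64)              # camera frames
lidar_depth = torch.rand(4, 1, 64, 64) * 80    # projected lidar returns (meters)
valid = torch.rand(4, 1, 64, 64) < 0.05        # lidar is sparse: ~5% of pixels have returns

pred = depth_net(image)
loss = (pred[valid] - lidar_depth[valid]).abs().mean()   # L1 on lidar-covered pixels only
loss.backward()
```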