Validation and Generalizability of Self-Supervised Image Reconstruction Methods for Undersampled MRI

#artificialintelligence

Purpose: To investigate aspects of the validation of self-supervised algorithms for reconstruction of undersampled MR images: quantitative evaluation of prospective reconstructions, potential differences between prospective and retrospective reconstructions, suitability of commonly used quantitative metrics, and generalizability. Theory and Methods: Two self-supervised algorithms, based on self-supervised denoising and on neural network image priors, were investigated. The methods were compared to a least-squares fit and a compressed sensing reconstruction using in-vivo and phantom data. Their generalizability was tested with prospectively undersampled data acquired under experimental conditions different from those used for training. Results: Prospective reconstructions can exhibit significant distortion relative to retrospective reconstructions and the ground truth.
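
To make the retrospective/prospective distinction concrete, the sketch below shows retrospective undersampling of a fully sampled image and a zero-filled (adjoint) reconstruction; this is not the paper's code, and the function names, the acceleration factor, and the random line-sampling scheme are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): retrospective undersampling of a
# fully sampled image and a zero-filled reconstruction, which is the
# minimum-norm least-squares solution for undersampled Fourier data.
import numpy as np

def undersample_kspace(image, acceleration=4, center_fraction=0.08, seed=0):
    """Retrospectively undersample the 2D k-space of a fully sampled image."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))        # fully sampled k-space
    ny = kspace.shape[0]
    mask = rng.random(ny) < (1.0 / acceleration)         # random phase-encode lines
    center = int(center_fraction * ny)
    mask[ny // 2 - center // 2 : ny // 2 + center // 2] = True  # keep low frequencies
    return kspace * mask[:, None], mask

def zero_filled_recon(undersampled_kspace):
    """Adjoint (zero-filled) reconstruction of the retained k-space lines."""
    return np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled_kspace)))

# usage on a synthetic phantom-like image
phantom = np.zeros((128, 128))
phantom[32:96, 32:96] = 1.0
ksp_u, mask = undersample_kspace(phantom, acceleration=4)
recon = zero_filled_recon(ksp_u)   # exhibits the aliasing typical of undersampling
```

A prospective acquisition, by contrast, never measures the discarded lines, which is why prospective reconstructions cannot simply be validated against a fully sampled ground truth.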


Improve object discovery with self-supervised transformers using TokenCut

#artificialintelligence

Improve object discovery with self-supervised transformers using TokenCut. Paper: "Self-Supervised Transformers for Unsupervised Object Discovery using Normalized Cut" (arXiv abstract: https://arxiv.org/abs/2202.11539, PDF: https://arxiv.org/pdf/2202.11539.pdf). Transformers trained with self-supervised learning using self-distillation loss (DINO) have been shown to produce attention maps that highlight salient foreground objects. ... In this paper, ... demonstrate a graph-based approach that ...
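
The sketch below illustrates the Normalized Cut step at the core of this kind of approach: patch tokens form a graph with binarized cosine-similarity affinities, and the second-smallest eigenvector of the normalized Laplacian bipartitions the graph. Feature extraction from a DINO ViT is omitted; `patch_features`, the threshold `tau`, and the foreground-selection rule are assumptions for illustration, not the authors' exact procedure.

```python
# Simplified sketch of a Normalized Cut over self-supervised (DINO) patch
# features; `patch_features` is an assumed (num_patches, dim) array.
import numpy as np

def ncut_foreground_mask(patch_features, tau=0.2):
    """Bipartition patch tokens via the spectral relaxation of Normalized Cut."""
    f = patch_features / np.linalg.norm(patch_features, axis=1, keepdims=True)
    affinity = f @ f.T                      # cosine similarities between patches
    W = (affinity > tau).astype(float)      # binarized affinity graph
    W += 1e-5                               # small weight keeps the graph connected
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(d)) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L_sym)
    fiedler = eigvecs[:, 1]                 # second-smallest eigenvector
    return fiedler > fiedler.mean()         # one side is taken as the foreground

# usage with random stand-in features (a real pipeline would use DINO ViT tokens)
tokens = np.random.randn(196, 384)          # e.g. 14x14 patches, 384-dim features
foreground = ncut_foreground_mask(tokens)
```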


Boosting Supervision with Self-Supervision for Few-shot Learning

arXiv.org Machine Learning

We present a technique to improve the transferability of deep representations learned on small labeled datasets by introducing self-supervised tasks as auxiliary loss functions. While recent approaches for self-supervised learning have shown the benefits of training on large unlabeled datasets, we find improvements in generalization even on small datasets and when combined with strong supervision. Learning representations with self-supervised losses reduces the relative error rate of a state-of-the-art meta-learner by 5-25% on several few-shot learning benchmarks, as well as that of off-the-shelf deep networks trained from scratch on standard classification tasks. We find that the benefits of self-supervision increase with the difficulty of the task. Our approach uses the images within the dataset itself to construct the self-supervised losses, and is therefore an effective way of learning transferable representations without relying on any external training data.
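
A minimal sketch of the general recipe, assuming rotation prediction as the auxiliary self-supervised task and a shared backbone with two linear heads (the paper's specific tasks and architecture may differ): the self-supervised labels are constructed from the same batch of images, so no external data is required.

```python
# Sketch only: supervised cross-entropy plus a self-supervised auxiliary loss.
# `backbone`, `feat_dim`, and rotation prediction are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskModel(nn.Module):
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone                          # shared feature extractor
        self.cls_head = nn.Linear(feat_dim, num_classes)  # supervised head
        self.rot_head = nn.Linear(feat_dim, 4)            # 0/90/180/270 degrees

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.rot_head(feats)

def combined_loss(model, images, labels, aux_weight=1.0):
    """Supervised loss plus a rotation-prediction loss built from the same batch."""
    # construct the self-supervised task: rotate each image by a random multiple of 90 deg
    rot_labels = torch.randint(0, 4, (images.size(0),), device=images.device)
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, rot_labels)])
    cls_logits, _ = model(images)
    _, rot_logits = model(rotated)
    loss_sup = F.cross_entropy(cls_logits, labels)
    loss_ssl = F.cross_entropy(rot_logits, rot_labels)
    return loss_sup + aux_weight * loss_ssl
```

Because both heads share the backbone, the auxiliary gradient acts as a regularizer on the shared representation, which is where the reported gains in transferability would come from.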