Improving Self-supervised Learning with Automated Unsupervised Outlier Arbitration: Supplementary File

Neural Information Processing Systems

Section 5, Section 6, and Section 8 provide further implementation details for the experiments. We use "M" or "S" to indicate whether content appears in the main file or in the supplementary file.


Review for NeurIPS paper: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments

Neural Information Processing Systems

Weaknesses: The paper unfortunately has many weak points. They are presented below in separate categories. Intro/Motivation: The paper focuses too much on "not using a momentum encoder" and "not using a memory bank". These are largely irrelevant points. Firstly, until one shows that there is no benefit from a momentum encoder, it is best not to claim that "not having momentum" is a contribution or a positive aspect of the model.


Improving Pre-Trained Self-Supervised Embeddings Through Effective Entropy Maximization

Chakraborty, Deep, LeCun, Yann, Rudner, Tim G. J., Learned-Miller, Erik

arXiv.org Machine Learning

Self-supervised learning (SSL) methods are widely employed for pre-training features on unlabeled data and are highly effective for subsequent fine-tuning on a wide variety of downstream tasks [Che+20; Gri+20; Car+20; BPL21]. In this paper, we ask whether it is possible to formulate a well-motivated, general-purpose criterion that allows further improving already-trained, highly-optimized SSL embeddings with only a handful of epochs of continued pre-training. Like several previous works [BJ17; WI20; Liu+22; Ozs+22], we start with the principle of maximizing the entropy of embeddings. One well-known motivation for this is that for a discrete embedding space, maximizing the entropy of a deterministic mapping preserves as much information as possible about the inputs. That is, such a maximum-entropy embedding maximizes the mutual information between the embedding and the input distribution [see, for example, Hje+18]. Similar results hold for continuous embeddings under appropriate noise models [see, for example, discussion of the Gaussian channel in CT91]. By maximizing the amount of information retained, one hopes to prepare as well as possible for future, as-yet-unknown, discrimination tasks. Our contribution is thus not the maximization of embedding entropy, but rather how we go about it.
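The abstract above argues that maximizing embedding entropy maximizes the information the embedding retains about the input. A common way to make this concrete for continuous embeddings is a Gaussian entropy surrogate, the log-determinant of the embedding covariance. The sketch below is illustrative only and is not the paper's actual criterion; the function name and jitter constant are assumptions.

```python
import numpy as np

def gaussian_entropy_proxy(z):
    """Differential entropy of a Gaussian fit to embeddings z of shape (N, D).

    Under a Gaussian assumption, H = 0.5 * logdet(2*pi*e * Cov). Maximizing
    this proxy spreads embeddings out, so more information about the inputs
    survives the mapping. (Illustrative surrogate, not the paper's method.)
    """
    z = z - z.mean(axis=0, keepdims=True)
    cov = (z.T @ z) / (len(z) - 1)
    d = cov.shape[0]
    # small jitter keeps the log-determinant finite for near-collapsed embeddings
    _, logdet = np.linalg.slogdet(cov + 1e-6 * np.eye(d))
    return 0.5 * (d * np.log(2 * np.pi * np.e) + logdet)

rng = np.random.default_rng(0)
spread = rng.normal(size=(512, 8))             # well-spread embeddings
collapsed = 0.01 * rng.normal(size=(512, 8))   # nearly collapsed embeddings
assert gaussian_entropy_proxy(spread) > gaussian_entropy_proxy(collapsed)
```

The proxy is higher for well-spread embeddings than for collapsed ones, which is why entropy-style terms are used to counteract representation collapse.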


Self-Supervised Multiple Instance Learning for Acute Myeloid Leukemia Classification

Kazeminia, Salome, Joosten, Max, Bosnacki, Dragan, Marr, Carsten

arXiv.org Artificial Intelligence

Automated disease diagnosis using medical image analysis relies on deep learning, often requiring large labeled datasets for supervised model training. Diseases like Acute Myeloid Leukemia (AML) pose challenges due to scarce and costly annotations on a single-cell level. Multiple Instance Learning (MIL) addresses weakly labeled scenarios but necessitates powerful encoders typically trained with labeled data. In this study, we explore Self-Supervised Learning (SSL) as a pre-training approach for MIL-based AML subtype classification from blood smears, removing the need for labeled data during encoder training. We investigate the three state-of-the-art SSL methods SimCLR, SwAV, and DINO, and compare their performance against supervised pre-training. Our findings show that SSL-pretrained encoders achieve comparable performance, showcasing the potential of SSL in MIL. This breakthrough offers a cost-effective and data-efficient solution, propelling the field of AI-based disease diagnosis.
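In the MIL setting described above, a blood smear is a "bag" of single-cell instances: a (possibly SSL-pretrained) encoder embeds each cell, and an aggregation step pools the instance embeddings into one patient-level representation. A minimal sketch of attention-style pooling is below; the function names are hypothetical, and the fixed attention vector `w` stands in for a learned attention module.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(instance_embeddings, w):
    """Aggregate per-cell embeddings (N, D) into one bag embedding (D,).

    Each cell gets a relevance score via the attention vector w (D,), the
    scores are normalized with softmax, and the bag embedding is the
    attention-weighted sum of instances. (Sketch only; w is a stand-in
    for a learned attention network.)
    """
    scores = instance_embeddings @ w        # (N,) relevance per cell
    alpha = softmax(scores)                 # attention weights, sum to 1
    return alpha @ instance_embeddings      # (D,) bag-level embedding

rng = np.random.default_rng(0)
cells = rng.normal(size=(50, 16))  # 50 cells, 16-D features from a frozen encoder
w = rng.normal(size=16)
bag = attention_mil_pool(cells, w)
assert bag.shape == (16,)
```

A bag-level classifier then operates on `bag`, so only weak (patient-level) labels are needed downstream, which is the point of combining SSL encoders with MIL.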


Augmentations vs Algorithms: What Works in Self-Supervised Learning

Morningstar, Warren, Bijamov, Alex, Duvarney, Chris, Friedman, Luke, Kalibhat, Neha, Liu, Luyang, Mansfield, Philip, Rojas-Gomez, Renan, Singhal, Karan, Green, Bradley, Prakash, Sushant

arXiv.org Artificial Intelligence

We study the relative effects of data augmentations, pretraining algorithms, and model architectures in Self-Supervised Learning (SSL). While the recent literature in this space leaves the impression that the pretraining algorithm is of critical importance to performance, understanding its effect is complicated by the difficulty in making objective and direct comparisons between methods. We propose a new framework which unifies many seemingly disparate SSL methods into a single shared template. Using this framework, we identify aspects in which methods differ and observe that in addition to changing the pretraining algorithm, many works also use new data augmentations or more powerful model architectures. We compare several popular SSL methods using our framework and find that many algorithmic additions, such as prediction networks or new losses, have a minor impact on downstream task performance (often less than $1\%$), while enhanced augmentation techniques offer more significant performance improvements ($2-4\%$). Our findings challenge the premise that SSL is being driven primarily by algorithmic improvements, and suggest instead a bitter lesson for SSL: that augmentation diversity and data / model scale are more critical contributors to recent advances in self-supervised learning.
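Since the abstract attributes the larger gains (2-4%) to augmentation diversity, it is worth recalling the common recipe: produce two independently augmented "views" of each image and train the two views to agree. The sketch below composes a few typical augmentations (crop, flip, brightness jitter) on a raw array; it is a simplified stand-in for the pipelines used in practice, and all names and parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flip(img):
    """Horizontally flip the image with probability 0.5."""
    return img[:, ::-1] if rng.random() < 0.5 else img

def random_crop(img, size):
    """Crop a random size x size patch from an H x W x C image."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size]

def brightness_jitter(img, strength=0.4):
    """Scale pixel intensities by a random factor in [1-strength, 1+strength]."""
    scale = 1.0 + strength * (2.0 * rng.random() - 1.0)
    return np.clip(img * scale, 0.0, 1.0)

def two_views(img, crop=24):
    """Return two independently augmented views of one image, the
    common recipe behind SimCLR-style contrastive pretraining."""
    def view():
        return brightness_jitter(random_flip(random_crop(img, crop)))
    return view(), view()

img = rng.random((32, 32, 3))
v1, v2 = two_views(img)
assert v1.shape == (24, 24, 3) and v2.shape == (24, 24, 3)
```

Richer augmentation stacks increase view diversity, which is one way to read the paper's finding that augmentations, rather than algorithmic additions, drive much of the downstream improvement.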