Unsupervised or Indirectly Supervised Learning


Perceived Realism of High-Resolution Generative Adversarial Network–derived Synthetic Mammograms

#artificialintelligence

To explore whether generative adversarial networks (GANs) can enable synthesis of realistic medical images that are indiscernible from real images, even by domain experts. In this retrospective study, progressive growing GANs were used to synthesize mammograms at a resolution of 1280 × 1024 pixels by using images from 90 000 patients (average age, 56 years ± 9) collected between 2009 and 2019. To evaluate the results, a method based on moment plots was used to assess distributional alignment for ultra–high-dimensional pixel distributions; this method was able to reveal potential sources of misalignment. A total of 117 volunteer participants (55 radiologists and 62 nonradiologists) took part in a study to assess the realism of synthetic images from GANs.
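
The abstract does not spell out the moment-plot method, but the general idea can be sketched: compare per-pixel moments of real and synthetic batches and look for systematic departures from the diagonal. A minimal illustration (the array shapes and stand-in data below are assumptions, not the paper's setup):

```python
# Sketch of a moment-plot check for distributional alignment (illustrative,
# not the paper's exact method): compare per-pixel moments of real vs.
# synthetic image batches and plot them against each other.
import numpy as np
import matplotlib.pyplot as plt

# Assumed inputs: batches of grayscale images, shape (N, H, W), values in [0, 1].
rng = np.random.default_rng(0)
real = rng.beta(2.0, 5.0, size=(256, 64, 64))    # stand-in for real mammograms
synth = rng.beta(2.1, 5.2, size=(256, 64, 64))   # stand-in for GAN samples

def pixel_moments(batch):
    """Per-pixel first moment (mean) and second central moment (variance)."""
    flat = batch.reshape(batch.shape[0], -1)
    return flat.mean(axis=0), flat.var(axis=0)

mu_r, var_r = pixel_moments(real)
mu_s, var_s = pixel_moments(synth)

# Perfect alignment would put every point on the diagonal; systematic
# departures point at which pixels (and which moments) are misaligned.
fig, axes = plt.subplots(1, 2, figsize=(9, 4))
axes[0].scatter(mu_r, mu_s, s=2, alpha=0.3)
axes[0].plot([0, 1], [0, 1], "r--")
axes[0].set(xlabel="real pixel mean", ylabel="synthetic pixel mean")
axes[1].scatter(var_r, var_s, s=2, alpha=0.3)
lim = max(var_r.max(), var_s.max())
axes[1].plot([0, lim], [0, lim], "r--")
axes[1].set(xlabel="real pixel variance", ylabel="synthetic pixel variance")
plt.tight_layout()
plt.show()
```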


Unsupervised Learning: How Machines Learn on Their Own

#artificialintelligence

This type of machine learning (ML) grants AI applications the ability to learn and find hidden patterns in large datasets without human supervision; it is also considered crucial for achieving artificial general intelligence. Labeling data is labor-intensive, time-consuming, and in many cases impractical, and that is where unsupervised learning makes a big difference, letting AI applications learn without labels or supervision. Unsupervised learning (UL) is a machine learning technique used to identify patterns in datasets containing unclassified and unlabeled data points: the AI system is given only the input data and no corresponding output data.


ProteinGAN: A generative adversarial network that generates functional protein sequences

#artificialintelligence

Proteins are large, highly complex, naturally occurring molecules that can be found in all living organisms. These unique substances, which consist of amino acids joined together by peptide bonds to form long chains, can have a variety of functions and properties. The specific order in which different amino acids are arranged to form a given protein ultimately determines the protein's 3D structure, physicochemical properties and molecular function. While scientists have been studying proteins for decades, designing proteins that elicit specific chemical reactions has so far proved highly challenging. Researchers at Biomatter Designs, Vilnius University in Lithuania, and Chalmers University of Technology in Sweden have recently developed ProteinGAN, a generative adversarial network (GAN) that can process and 'learn' from different natural protein sequences.
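
The article does not detail ProteinGAN's architecture; the adversarial setup it builds on can be sketched minimally for fixed-length amino-acid sequences (all layer sizes, the soft-sequence trick, and the training loop below are illustrative assumptions, not the authors' design):

```python
# Minimal GAN sketch for fixed-length amino-acid sequences (illustrative only;
# ProteinGAN itself uses a far more elaborate architecture).
import torch
import torch.nn as nn

VOCAB = 20          # standard amino acids
SEQ_LEN = 64        # assumed fixed sequence length
NOISE_DIM = 100

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 256), nn.ReLU(),
            nn.Linear(256, SEQ_LEN * VOCAB),
        )
    def forward(self, z):
        # Softmax over the vocabulary yields a "soft" sequence the
        # discriminator can backpropagate through.
        logits = self.net(z).view(-1, SEQ_LEN, VOCAB)
        return torch.softmax(logits, dim=-1)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SEQ_LEN * VOCAB, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )
    def forward(self, seqs):
        return self.net(seqs.view(seqs.size(0), -1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# One adversarial step on a toy batch of one-hot "real" sequences.
real = torch.eye(VOCAB)[torch.randint(VOCAB, (32, SEQ_LEN))]
z = torch.randn(32, NOISE_DIM)

d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(G(z).detach()), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = bce(D(G(z)), torch.ones(32, 1))   # generator tries to fool D
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```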


Unsupervised Learning -- A Complete Overview

#artificialintelligence

Unsupervised learning is a machine learning technique in which, instead of learning from a labeled training dataset (as in supervised learning), the model itself finds hidden patterns and insights in the data. It creates groups based on similarity, even without knowing what each group represents. In this article I will not explain everything in depth; I will only give a short overview of the different types of unsupervised learning. People often ask me what unsupervised learning is and how many types there are, and when I googled it I did not find a satisfying answer; that is my motivation for writing this article. Example: suppose we have a group of students from different universities and we have to group them based on some features. We hand this responsibility to an unsupervised algorithm, as sketched below.
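
A minimal sketch of that student-grouping example with scikit-learn's KMeans (the two features and the data itself are made up purely for illustration):

```python
# Toy version of the student-grouping example: cluster students by two
# made-up features (weekly study hours, exam score) without ever seeing labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Two loose groups of students; each row is (study hours, exam score).
students = np.vstack([
    rng.normal(loc=[5, 55], scale=[1.5, 8], size=(30, 2)),
    rng.normal(loc=[20, 85], scale=[2.5, 5], size=(30, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(students)
print(kmeans.labels_[:10])        # group assignment per student
print(kmeans.cluster_centers_)    # what each discovered group "represents"
```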


Machine Learning

#artificialintelligence

Machine learning algorithms all aim to learn and improve their accuracy as they process more data. One way to classify the tasks that machine learning algorithms solve is by how much feedback they present to the system. In some scenarios, the computer is provided with a significant amount of labelled training data, which is called supervised learning. In other cases, no labelled data is provided, which is known as unsupervised learning. Lastly, in semi-supervised learning, some labelled training data is provided, but most of the training data is unlabelled.
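
The three regimes differ only in how much of the label vector is observed; a minimal sketch contrasting them on one synthetic dataset with scikit-learn (the dataset and label-masking scheme are illustrative choices):

```python
# The same 2-D dataset under the three feedback regimes described above.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.semi_supervised import LabelSpreading

X, y = make_blobs(n_samples=200, centers=3, random_state=0)

# Supervised: every example comes with its label.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Unsupervised: labels are withheld entirely; structure must be discovered.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Semi-supervised: only a handful of labels; -1 marks unlabelled points.
y_semi = np.full_like(y, -1)
y_semi[::20] = y[::20]                 # keep 1 label in 20
semi = LabelSpreading().fit(X, y_semi)

print(clf.score(X, y), semi.score(X, y))
```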


Incremental Semi-Supervised Learning Through Optimal Transport

arXiv.org Machine Learning

Semi-supervised learning provides an effective paradigm for leveraging unlabeled data to improve a model's performance. Among the many strategies proposed, graph-based methods have shown excellent properties, in particular since they allow solving transductive tasks directly, in accordance with Vapnik's principle, and they can be extended efficiently to inductive tasks. In this paper, we propose a novel approach for transductive semi-supervised learning, using a complete bipartite edge-weighted graph. The proposed approach uses the regularized optimal transport between empirical measures defined on labelled and unlabelled data points in order to obtain an affinity matrix from the optimal transport plan. This matrix is further used to propagate labels through the vertices of the graph in an incremental process that ensures the certainty of the predictions by incorporating a certainty score based on Shannon's entropy. We also analyze the convergence of our approach, and we derive an efficient way to extend it to out-of-sample data. Experimental analysis was used to compare the proposed approach with other label propagation algorithms on 12 benchmark datasets, for which we surpass state-of-the-art results. We release our code.
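
A simplified one-pass sketch of the core idea, using the POT library's Sinkhorn solver: the regularized transport plan between labeled and unlabeled empirical measures serves as an affinity for propagating labels, and the Shannon entropy of the propagated distribution gives a certainty score (the paper's incremental, convergence-analyzed procedure is compressed here; the data and thresholds are illustrative):

```python
# Simplified one-pass sketch of label propagation via regularized optimal
# transport (the paper's method is incremental; this is just the core idea).
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
n_l, n_u, n_cls = 30, 100, 3

# Toy labeled / unlabeled points drawn around three class centers.
centers = np.array([[0, 0], [4, 0], [2, 4]], dtype=float)
y_l = rng.integers(n_cls, size=n_l)
X_l = centers[y_l] + rng.normal(scale=0.7, size=(n_l, 2))
X_u = centers[rng.integers(n_cls, size=n_u)] + rng.normal(scale=0.7, size=(n_u, 2))

# Empirical measures on both point sets, squared-Euclidean cost.
a = np.full(n_l, 1.0 / n_l)
b = np.full(n_u, 1.0 / n_u)
M = ot.dist(X_l, X_u)                  # (n_l, n_u) cost matrix
plan = ot.sinkhorn(a, b, M, reg=0.05)  # entropy-regularized transport plan

# Columns of the plan act as affinities from each unlabeled point to the
# labeled points; push one-hot labels through them.
one_hot = np.eye(n_cls)[y_l]                        # (n_l, n_cls)
scores = plan.T @ one_hot                           # (n_u, n_cls)
probs = scores / scores.sum(axis=1, keepdims=True)

# Shannon entropy of the propagated distribution = certainty score; the
# incremental variant would accept only low-entropy points in each round.
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
pred = probs.argmax(axis=1)
confident = entropy < 0.3
print(f"{confident.sum()} of {n_u} points labeled confidently")
```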


SSD: A Unified Framework for Self-Supervised Outlier Detection

arXiv.org Artificial Intelligence

We ask the following question: what training information is required to design an effective outlier/out-of-distribution (OOD) detector, i.e., one that detects samples lying far away from the training distribution? Since unlabeled data is easily accessible for many applications, the most compelling approach is to develop detectors based on only unlabeled in-distribution data. However, we observe that most existing detectors based on unlabeled data perform poorly, often equivalent to a random prediction. In contrast, existing state-of-the-art OOD detectors achieve impressive performance but require access to fine-grained data labels for supervised training. We propose SSD, an outlier detector based on only unlabeled in-distribution data. We use self-supervised representation learning followed by Mahalanobis-distance-based detection in the feature space. We demonstrate that SSD outperforms most existing detectors based on unlabeled data by a large margin. Additionally, SSD achieves performance on par with, and sometimes even better than, detectors based on supervised training. Finally, we expand our detection framework with two key extensions. First, we formulate few-shot OOD detection, in which the detector has access to only one to five samples from each class of the targeted OOD dataset. Second, we extend our framework to incorporate training data labels, if available. We find that our novel detection framework based on SSD displays enhanced performance with these extensions, and achieves state-of-the-art performance. Our code is publicly available at https://github.com/inspire-group/SSD.
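
The detection stage is easy to sketch: cluster unlabeled in-distribution features, fit a shared covariance, and score test points by their minimum Mahalanobis distance to a cluster mean. A minimal version with the self-supervised encoder stubbed out by random features (the shapes and cluster count are assumptions):

```python
# Sketch of SSD's detection stage: Mahalanobis distance in feature space.
# The self-supervised encoder is stubbed out with random features here.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
feat_dim, n_train = 32, 2000
train_feats = rng.normal(size=(n_train, feat_dim))   # in-distribution features

# Cluster the unlabeled in-distribution features (no labels required).
k = 5
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(train_feats)

means = np.stack([train_feats[clusters == c].mean(axis=0) for c in range(k)])
centered = train_feats - means[clusters]              # shared covariance
cov = centered.T @ centered / n_train
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(feat_dim))

def ood_score(feats):
    """Minimum squared Mahalanobis distance to any cluster mean."""
    d = feats[:, None, :] - means[None, :, :]         # (n, k, feat_dim)
    maha = np.einsum("nkd,de,nke->nk", d, cov_inv, d)
    return maha.min(axis=1)                           # higher = more OOD

in_dist = rng.normal(size=(10, feat_dim))
far_ood = rng.normal(loc=5.0, size=(10, feat_dim))
print(ood_score(in_dist).mean(), ood_score(far_ood).mean())
```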


Instant-Teaching: An End-to-End Semi-Supervised Object Detection Framework

arXiv.org Artificial Intelligence

Supervised object detection frameworks demand large amounts of laborious manual annotation, which may not be practical in real applications. Semi-supervised object detection (SSOD) can effectively leverage unlabeled data to improve model performance, which is of great significance for the application of object detection models. In this paper, we revisit SSOD and propose Instant-Teaching, a completely end-to-end and effective SSOD framework, which uses instant pseudo labeling with extended weak-strong data augmentations for teaching during each training iteration. To alleviate the confirmation bias problem and improve the quality of pseudo annotations, we further propose a co-rectify scheme based on Instant-Teaching, denoted Instant-Teaching$^*$. Extensive experiments on both the MS-COCO and PASCAL VOC datasets substantiate the superiority of our framework. Specifically, our method surpasses state-of-the-art methods by 4.2 mAP on MS-COCO when using $2\%$ labeled data. Even with the full supervised information of MS-COCO, the proposed method still outperforms state-of-the-art methods by about 1.0 mAP. On PASCAL VOC, we achieve more than 5 mAP improvement by applying VOC07 as labeled data and VOC12 as unlabeled data.
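
The instant pseudo-labeling step can be sketched compactly in a classification stand-in for the detection setting: confident predictions on a weakly augmented view supervise the same model on a strongly augmented view (the threshold, augmentations, and model below are illustrative, and the co-rectify scheme is omitted):

```python
# Sketch of one instant pseudo-labeling step (classification stand-in for
# the SSOD setting): confident predictions on weak views supervise strong views.
import torch
import torch.nn.functional as F

def pseudo_label_step(model, optimizer, weak_batch, strong_batch, threshold=0.9):
    """weak_batch and strong_batch are two augmentations of the same images."""
    with torch.no_grad():
        probs = torch.softmax(model(weak_batch), dim=1)  # instant pseudo labels
        conf, pseudo = probs.max(dim=1)
        mask = conf >= threshold                         # keep confident ones only

    if mask.any():
        logits = model(strong_batch[mask])
        loss = F.cross_entropy(logits, pseudo[mask])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
    return 0.0

# Toy usage with a linear model on flattened 8x8 "images".
model = torch.nn.Linear(64, 10)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(16, 64)
pseudo_label_step(model, opt, weak_batch=x, strong_batch=x + 0.3 * torch.randn_like(x))
```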


Unsupervised Learning: How Machines Learn on Their Own

#artificialintelligence

Exclusive clustering: As the name suggests, exclusive clustering specifies that a data point or object can exist in only one cluster. Hierarchical clustering: Hierarchical clustering tries to create a hierarchy of clusters. There are two types of hierarchical clustering: agglomerative and divisive. Agglomerative follows the bottom-up approach: it initially treats each data point as an individual cluster, and pairs of clusters are merged as they move up the hierarchy. Divisive is the very opposite of agglomerative: it follows a top-down approach, starting with all data points in one cluster and recursively splitting clusters as it moves down the hierarchy. A bottom-up example is sketched below.
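
A minimal bottom-up example with scikit-learn's AgglomerativeClustering (scikit-learn ships no divisive variant, so only the agglomerative direction is shown; the data is synthetic):

```python
# Bottom-up (agglomerative) hierarchical clustering on toy 2-D data.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=0.4, size=(20, 2))
               for c in ([0, 0], [3, 3], [0, 3])])

# Each point starts as its own cluster; the closest pairs are merged
# repeatedly until the requested number of clusters remains.
agg = AgglomerativeClustering(n_clusters=3, linkage="ward").fit(X)
print(np.bincount(agg.labels_))   # cluster sizes after merging
```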


SPICE: Semantic Pseudo-labeling for Image Clustering

arXiv.org Artificial Intelligence

This paper presents SPICE, a Semantic Pseudo-labeling framework for Image ClustEring. Instead of using indirect loss functions required by the recently proposed methods, SPICE generates pseudo-labels via self-learning and directly uses the pseudo-label-based classification loss to train a deep clustering network. The basic idea of SPICE is to synergize the discrepancy among semantic clusters, the similarity among instance samples, and the semantic consistency of local samples in an embedding space to optimize the clustering network in a semantically-driven paradigm. Specifically, a semantic-similarity-based pseudo-labeling algorithm is first proposed to train a clustering network through unsupervised representation learning. Given the initial clustering results, a local semantic consistency principle is used to select a set of reliably labeled samples, and a semi-pseudo-labeling algorithm is adapted for performance boosting. Extensive experiments demonstrate that SPICE clearly outperforms the state-of-the-art methods on six common benchmark datasets including STL10, Cifar10, Cifar100-20, ImageNet-10, ImageNet-Dog, and Tiny-ImageNet. On average, our SPICE method improves the current best results by about 10% in terms of adjusted rand index, normalized mutual information, and clustering accuracy.
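
A heavily reduced sketch of the semantic-similarity-based pseudo-labeling idea: assign each embedded sample to its nearest prototype, keep only assignments consistent with the sample's nearest neighbors, and hand the reliable subset to an ordinary classification loss (every name and threshold below is illustrative, not the authors' pipeline):

```python
# Caricature of SPICE-style semantic pseudo-labeling in an embedding space
# (illustrative reduction, not the authors' full pipeline).
import numpy as np

rng = np.random.default_rng(0)
n, dim, k = 300, 16, 3
emb = rng.normal(size=(n, dim))                    # assumed: embeddings from a
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # self-supervised network

# Cluster prototypes (random here; SPICE derives them from the network head).
protos = emb[rng.choice(n, size=k, replace=False)]

# Pseudo-label = nearest prototype by cosine similarity.
sim = emb @ protos.T                               # (n, k)
pseudo = sim.argmax(axis=1)

# Local semantic consistency: keep a sample only if most of its nearest
# neighbors received the same pseudo-label.
nbrs = np.argsort(-(emb @ emb.T), axis=1)[:, 1:11]   # 10 nearest neighbors
agree = (pseudo[nbrs] == pseudo[:, None]).mean(axis=1)
reliable = agree >= 0.8

# The reliable subset would then train the clustering head with an ordinary
# cross-entropy loss on (emb[reliable], pseudo[reliable]).
print(f"{reliable.sum()} / {n} samples kept as reliable pseudo-labels")
```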