Unsupervised or Indirectly Supervised Learning

NVIDIA AI Releases StyleGAN3: Alias-Free Generative Adversarial Networks


Generative adversarial networks (GANs) have seen rapid improvement in the quality and resolution of their output. These techniques are used for various applications, including image editing, domain translation, and video generation, to name just a few. While several ways to control a GAN's generative process have been found, relatively little is known about the synthesis process itself. StyleGAN, the first image generation method of its kind to produce highly realistic images, was open-sourced in February 2019. In 2019, Nvidia launched the second version of StyleGAN, which fixed characteristic artifacts and further improved the quality of generated images.

Generative Adversarial Networks in Machine Learning


GANs are one of the most useful machine learning techniques for photo editing. A Generative Adversarial Network (GAN) is a class of machine learning frameworks designed by Ian Goodfellow and his colleagues in 2014. Initially put forward as a generative model for unsupervised learning, GANs have also proved extremely useful for semi-supervised learning, supervised learning, and reinforcement learning. They are built from two neural networks that compete with each other and can create new output by analyzing, capturing, and reproducing the variation in a given dataset.
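The two-network game described above can be sketched in miniature. The toy below (our own illustrative setup, not NVIDIA's StyleGAN) pits a one-parameter-pair generator against a logistic discriminator on 1-D data; all names and hyperparameters are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0      # generator: G(z) = a*z + b
w, c = 0.1, 0.0      # discriminator: D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(2000):
    real = rng.normal(3.0, 0.5, size=32)   # "real" data the generator imitates
    z = rng.normal(size=32)
    fake = a * z + b

    # Discriminator update: ascend on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator update: descend on -log D(fake); the gradient flows through D.
    d_fake = sigmoid(w * fake + c)
    dldx = -(1 - d_fake) * w
    a -= lr * np.mean(dldx * z)
    b -= lr * np.mean(dldx)

samples = a * rng.normal(size=1000) + b
print(f"generated mean after training: {samples.mean():.2f}")
```

Real GANs replace the two scalar models with deep networks and backpropagate the same adversarial losses, but the alternating two-player loop is exactly this shape.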

Unsupervised Learning: What, Why, and Where?


Most people start their machine learning journey with a few basic techniques, of which unsupervised learning, supervised learning, and reinforcement learning are the major ones. Good use of information plays a vital role in any effective business operation. At some point, however, the information grows beyond simple processing capacity, and that is where machine learning comes in. Before anything else happens, the information needs to be explored and processed.
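Unsupervised learning finds structure in data that carries no labels at all. A minimal concrete example is k-means clustering, sketched below in NumPy; the data, the choice of k, and the deterministic initialization are all our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two obvious groups of 2-D points, but NO labels are given to the algorithm.
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(4, 0.3, (50, 2))])

k = 2
# Simple deterministic start: the corners of the data's bounding box.
centroids = np.array([data.min(axis=0), data.max(axis=0)])
for _ in range(20):
    # Assign each point to its nearest centroid, then recompute the centroids.
    dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centroids = np.array([data[labels == j].mean(axis=0) for j in range(k)])

print("cluster centers:\n", centroids.round(1))
```

The algorithm recovers the two groups purely from the geometry of the points, which is the essence of the "explore the information first" step described above.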

Semi-supervised learning made simple


Semi-supervised learning is a machine learning technique for deriving useful information from both labelled and unlabelled data. Before working through this tutorial, you should have basic familiarity with supervised learning on images with PyTorch. We will omit reinforcement learning here and concentrate on the first two types. In supervised learning, the data consists of labelled objects, and a machine learning model is tasked with learning how to assign labels (or values) to objects.
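One of the simplest semi-supervised techniques is self-training, or pseudo-labelling: fit a model on the few labelled points, label the unlabelled points with its predictions, and refit on everything. The sketch below uses a nearest-centroid classifier on toy 1-D data; the data and iteration count are our own assumptions, not part of any particular library's API.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two classes centred at 0 and 5; only 3 points per class are labelled.
x_lab = np.array([-0.1, 0.2, 0.1, 4.9, 5.2, 5.1])
y_lab = np.array([0, 0, 0, 1, 1, 1])
x_unlab = np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 1, 100)])

centroids = np.array([x_lab[y_lab == c].mean() for c in (0, 1)])
for _ in range(5):
    # Pseudo-label each unlabelled point with its nearest class centroid ...
    pseudo = np.abs(x_unlab[:, None] - centroids[None, :]).argmin(axis=1)
    # ... then refit the centroids on labelled + pseudo-labelled data together.
    xs = np.concatenate([x_lab, x_unlab])
    ys = np.concatenate([y_lab, pseudo])
    centroids = np.array([xs[ys == c].mean() for c in (0, 1)])

print("class centroids:", centroids.round(2))
```

Six labelled points alone give noisy centroids; folding in the 200 unlabelled points through their pseudo-labels sharpens the estimate, which is the payoff semi-supervised methods aim for.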

Machine Learning in Java


Machine Learning (ML) has brought significant promise to different fields in both academia and industry. Day by day, ML has grown its engagement across a comprehensive list of applications, such as image and speech recognition, pattern recognition, optimization, natural language processing, recommendations, and many others. Programming computers to learn from experience should eventually eliminate the need for much of this detailed programming effort. Machine learning can be divided into four main techniques: regression, classification, clustering, and reinforcement learning. These techniques solve problems of different natures, mainly in two forms: supervised and unsupervised learning.
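Of the four techniques listed, regression is the quickest to make concrete: ordinary least squares fitted in closed form. We sketch it in NumPy for brevity (the same linear algebra ports directly to a Java library); the synthetic data and true coefficients are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 200)   # ground truth: slope 2, intercept 1

# Design matrix with a bias column; least-squares solve recovers both terms.
X = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"fitted model: y = {slope:.2f}*x + {intercept:.2f}")
```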

Fundamentals of Machine Learning & Deep Learning


Machine Learning can be defined as an approach to achieving artificial intelligence through systems or software models that learn from experience to find patterns in a set of data. Google uses artificial intelligence and machine learning in almost all of its applications. Google Photos displays photos related to your search terms and animates similar photos from your albums into quick videos. Gmail suggests phrases and completes sentences in emails. Google Assistant can take over real-world tasks, such as booking a haircut appointment over the phone.

Unsupervised Abstract Reasoning for Raven's Problem Matrices Artificial Intelligence

Raven's Progressive Matrices (RPM) are highly correlated with human intelligence and have been widely used to measure humans' abstract reasoning ability. In this paper, to study the abstract reasoning capability of deep neural networks, we propose the first unsupervised learning method for solving RPM problems. Since ground-truth labels are not allowed, we design a pseudo target based on the prior constraints of the RPM formulation to approximate the ground-truth label, which effectively converts the unsupervised learning strategy into a supervised one. However, the pseudo target can mislabel the correct answer, and the resulting noisy contrast leads to inaccurate model training. To alleviate this issue, we propose to improve model performance with negative answers. Moreover, we develop a decentralization method to adapt the feature representation to different RPM problems. Extensive experiments on three datasets demonstrate that our method even outperforms some supervised approaches. Our code is available at

Machine Learning in World of Genomics and Genetics


Genetics: DNA (deoxyribonucleic acid) is a double helix that carries the genetic information governing the development, functioning, growth, and reproduction of all organisms, and of viruses too. Every infant inherits genes from its biological parents, and the study of these genes is genetics. Most of us carry two copies of the genome (which contains genes as well as non-coding DNA; the study of the genome is genomics), amounting to about 6 billion base pairs of DNA. To reach our desired requirements, we need an approach or method to achieve them, and machine learning essentially offers three such methods to tackle the majority of our requirements.

OpenAI's CLIP is the most important advancement in computer vision this year


CLIP is a gigantic leap forward, bringing many of the recent developments from the realm of natural language processing into the mainstream of computer vision: unsupervised learning, transformers, and multimodality to name a few. The burst of innovation it has inspired shows its versatility. And this is likely just the beginning. There has been scuttlebutt recently about the coming age of "foundation models" in artificial intelligence that will underpin the state of the art across many different problems in AI; I think CLIP is going to turn out to be the bedrock model for computer vision. In this post, we aim to catalog the continually expanding use-cases for CLIP; we will update it periodically.
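At its core, CLIP's zero-shot classification reduces to one operation: embed the image and each candidate caption into a shared space, then pick the caption whose embedding is most similar to the image's by cosine similarity. Real CLIP produces these embeddings with large transformer encoders; the tiny vectors below are made up purely to show the scoring step.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity: how aligned two embedding vectors are.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

image_emb = np.array([0.9, 0.1, 0.2])                 # stand-in image embedding
captions = {
    "a photo of a dog": np.array([0.8, 0.2, 0.1]),    # stand-in text embeddings
    "a photo of a cat": np.array([0.1, 0.9, 0.3]),
    "a photo of a car": np.array([0.2, 0.1, 0.9]),
}

scores = {text: cosine(image_emb, emb) for text, emb in captions.items()}
best = max(scores, key=scores.get)
print("predicted caption:", best)
```

Because any set of caption strings can be scored this way, the classifier's label set can be changed without retraining, which is what makes CLIP so reusable across the use cases this post catalogs.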

FedCon: A Contrastive Framework for Federated Semi-Supervised Learning


Federated Semi-Supervised Learning (FedSSL) has gained rising attention from both academic and industrial researchers, due to its unique characteristic of co-training machine learning models with isolated yet unlabeled data. Most existing FedSSL methods focus on the classical scenario, i.e., the labeled and unlabeled data are stored at the client side. However, in real-world applications, client users may not provide labels without any incentive. Thus, the scenario with labels at the server side is more practical. Since unlabeled data and labeled data are decoupled, most existing FedSSL approaches may fail to deal with such a scenario. To overcome this problem, in this paper we propose FedCon, which introduces a new learning paradigm, i.e., contrastive learning, to FedSSL.
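The federated substrate such methods build on is worth making concrete: each client fits a model on its own private data, and the server only aggregates the resulting parameters (FedAvg-style), never the raw data. The toy below, where clients fit a scalar mean, is our own illustrative sketch of that aggregation step, not the paper's actual training procedure (FedCon adds a contrastive objective on top of this kind of loop).

```python
import numpy as np

rng = np.random.default_rng(4)
true_mean = 2.0
# Ten clients, each holding 50 private samples that never leave the client.
clients = [rng.normal(true_mean, 1.0, size=50) for _ in range(10)]

def local_update(data):
    # Each client's locally fitted parameter (here just a sample mean).
    return data.mean()

# Server round: collect client parameters, average them, broadcast back.
global_param = np.mean([local_update(d) for d in clients])
print(f"aggregated parameter: {global_param:.2f} (true value {true_mean})")
```

Only the ten scalars cross the network, yet the aggregate is as accurate as if the server had pooled all 500 samples, which is the privacy/utility trade federated learning is built around.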