Inductive Learning


Cycle Consistent Adversarial Denoising Network for Multiphase Coronary CT Angiography

arXiv.org Artificial Intelligence

In coronary CT angiography, a series of CT images are taken at different levels of radiation dose during the examination. Although this reduces the total radiation dose, the image quality during the low-dose phases is significantly degraded. To address this problem, here we propose a novel semi-supervised learning technique that can remove the noise from the CT images obtained in the low-dose phases by learning from the CT images in the routine-dose phases. Although a supervised learning approach is not possible due to the differences in the underlying heart structure in the two phases, the images in the two phases are closely related, so we propose a cycle-consistent adversarial denoising network to learn the non-degenerate mapping between the low- and high-dose cardiac phases. Experimental results showed that the proposed method effectively reduces the noise in the low-dose CT image while preserving the detailed texture and edge information. Moreover, thanks to the cyclic consistency and identity loss, the proposed network does not create any artificial features that are not present in the input images. Visual grading and quality evaluation also confirm that the proposed method provides significant improvement in diagnostic quality.
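
To make the cyclic consistency and identity terms mentioned in the abstract concrete, here is a minimal PyTorch-style sketch of how such losses are commonly formed. The generator names G_LH (low-dose to high-dose) and G_HL (high-dose to low-dose) and the loss weights are illustrative assumptions, not the paper's exact configuration.

    # Sketch of cycle-consistency and identity losses for two hypothetical
    # generators: G_LH (low-dose -> high-dose) and G_HL (high-dose -> low-dose).
    import torch
    import torch.nn.functional as F

    def cycle_and_identity_losses(G_LH, G_HL, low, high, lam_cyc=10.0, lam_id=5.0):
        # Cycle consistency: mapping to the other phase and back should recover the input.
        cyc = F.l1_loss(G_HL(G_LH(low)), low) + F.l1_loss(G_LH(G_HL(high)), high)
        # Identity: a generator fed an image already in its target phase should leave it
        # unchanged, discouraging the network from inventing features absent in the input.
        idt = F.l1_loss(G_LH(high), high) + F.l1_loss(G_HL(low), low)
        return lam_cyc * cyc + lam_id * idt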



Explaining supervised learning to a kid (or your boss)

#artificialintelligence

Now that you know what machine learning is, let's meet the easiest kind. My goal here is to get humans of all stripes and (almost) all ages comfy with its basic jargon: instance, label, feature, model, algorithm, and supervised learning. Instances are also called 'examples' or 'observations.' What do these examples look like when we put them in a table? Sticking with convention (because good manners are good), each row is an instance.
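
To make the jargon concrete, here is a tiny toy table, sketched with pandas: each row is an instance, each column a feature, and the 'label' column holds the answer a supervised model should learn to predict. The data and column names are made up purely for illustration.

    # Hypothetical toy table: rows = instances, columns = features, plus a label column.
    import pandas as pd

    data = pd.DataFrame({
        "height_cm": [150, 172, 181],              # feature
        "weight_kg": [52, 70, 85],                 # feature
        "label":     ["child", "adult", "adult"],  # label (a.k.a. target)
    })
    print(data)  # three instances, two features, one label column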



An Overview of Proxy-label Approaches for Semi-supervised Learning

@machinelearnbot

Note: Parts of this post are based on my ACL 2018 paper Strong Baselines for Neural Semi-supervised Learning under Domain Shift with Barbara Plank. Unsupervised learning constitutes one of the main challenges for current machine learning models and one of the key elements that is missing for general artificial intelligence. While unsupervised learning on its own is still elusive, researchers have made a lot of progress in combining unsupervised learning with supervised learning. This branch of machine learning research is called semi-supervised learning. Semi-supervised learning has a long history. For a (slightly outdated) overview, refer to Zhu (2005) [1] and Chapelle et al. (2006) [2].
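
As a rough illustration of the simplest proxy-label approach such a survey covers, self-training, here is a hedged sketch: train on the labelled set, pseudo-label the most confident unlabelled points, fold them into the training set, and repeat. The classifier choice, confidence threshold, and round count are illustrative assumptions.

    # Minimal self-training (pseudo-labeling) sketch with an illustrative classifier.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def self_train(X_lab, y_lab, X_unlab, threshold=0.95, rounds=5):
        clf = LogisticRegression(max_iter=1000)
        for _ in range(rounds):
            clf.fit(X_lab, y_lab)
            if len(X_unlab) == 0:
                break
            probs = clf.predict_proba(X_unlab)
            confident = probs.max(axis=1) >= threshold
            if not confident.any():
                break
            # Move confidently pseudo-labelled points into the labelled set.
            pseudo = clf.classes_[probs[confident].argmax(axis=1)]
            X_lab = np.vstack([X_lab, X_unlab[confident]])
            y_lab = np.concatenate([y_lab, pseudo])
            X_unlab = X_unlab[~confident]
        return clf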


Semi-Supervised Learning with GANs: Revisiting Manifold Regularization

arXiv.org Machine Learning

GANs are powerful generative models that are able to model the manifold of natural images. We leverage this property to perform manifold regularization by approximating the Laplacian norm using a Monte Carlo approximation that is easily computed with the GAN. When incorporated into the feature-matching GAN of Improved GAN, we achieve state-of-the-art results for GAN-based semi-supervised learning on the CIFAR-10 dataset, with a method that is significantly easier to implement than competing methods.
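
A hedged sketch of the Monte Carlo idea described above: sample images from the GAN generator, nudge them along the image manifold by perturbing the latent code, and penalise how much the classifier's output changes. The generator, classifier, latent dimension, and perturbation scale below are placeholders, not the paper's exact setup.

    # Finite-difference Monte Carlo estimate of a manifold-smoothness penalty.
    import torch

    def manifold_reg(classifier, generator, batch=64, z_dim=100, eps=1e-2):
        z = torch.randn(batch, z_dim)
        delta = eps * torch.randn_like(z)
        # Small latent perturbations move the generated sample along the image manifold.
        out = classifier(generator(z))
        out_shift = classifier(generator(z + delta))
        # Penalise large changes in the classifier output along the manifold.
        return ((out - out_shift) ** 2).sum(dim=1).mean()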


Machine Learning for OpenCV – Supervised Learning

@machinelearnbot

Computer vision is one of today's most exciting application fields of machine learning. From self-driving cars to medical diagnosis, it has been widely used across domains. This course will take you from the essential concepts of statistical learning through the various algorithms you need to implement them alongside other OpenCV tasks. The course will also guide you through creating custom graphs and visualizations, and show you how to go from raw data to beautiful visualizations. We will also build a machine learning system that can make a medical diagnosis. By the end of this course, you will be ready to create your own ML system and will also be able to take on your own machine learning problems.


Supervised learning in disguise: the truth about unsupervised learning

@machinelearnbot

One of the first lessons you'll receive in machine learning is that there are two broad categories: supervised and unsupervised learning. Supervised learning is usually explained as the approach where you provide the correct answers as training data, and the machine learns patterns to apply to new data. Unsupervised learning is (apparently) where the machine figures out the correct answer on its own. Supposedly, unsupervised learning can discover something new that has not been found in the data before. Supervised learning cannot do that.
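
The textbook distinction the post goes on to question can be shown as two library calls: the supervised model is handed the answers y, while the unsupervised one only sees X. The toy data and the choice of LogisticRegression and KMeans are purely illustrative.

    # Supervised vs. unsupervised in two scikit-learn calls (illustrative toy data).
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X = [[0.0, 0.1], [0.2, 0.0], [0.9, 1.0], [1.0, 0.8]]
    y = [0, 0, 1, 1]

    LogisticRegression().fit(X, y)          # supervised: labels are provided
    KMeans(n_clusters=2, n_init=10).fit(X)  # unsupervised: structure found without labels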


AI Defined # 3 Supervised Learning - YouTube

#artificialintelligence

In this video, Jon defines the first of three types of machine learning: supervised learning. Supervised machine learning occurs when the machine is given a target.


SaaS: Speed as a Supervisor for Semi-supervised Learning

arXiv.org Machine Learning

We introduce the SaaS Algorithm for semi-supervised learning, which uses learning speed during stochastic gradient descent in a deep neural network to measure the quality of an iterative estimate of the posterior probability of unknown labels. Training speed in supervised learning correlates strongly with the percentage of correct labels, so we use it as an inference criterion for the unknown labels, without attempting to infer the model parameters at first. Despite its simplicity, SaaS achieves state-of-the-art results in semi-supervised learning benchmarks.
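
A hedged sketch of the core intuition: a candidate labelling of the unlabelled data is scored by how quickly the training loss falls over a few SGD steps, since learning speed correlates with the fraction of correct labels. The model copy, optimiser, and step count below are illustrative, not the paper's exact procedure.

    # Score a candidate labelling by the loss drop over a short probe training run.
    import copy
    import torch
    import torch.nn.functional as F

    def learning_speed(model, X, candidate_labels, steps=10, lr=0.1):
        probe = copy.deepcopy(model)  # score labels without disturbing the real model
        opt = torch.optim.SGD(probe.parameters(), lr=lr)
        losses = []
        for _ in range(steps):
            opt.zero_grad()
            loss = F.cross_entropy(probe(X), candidate_labels)
            loss.backward()
            opt.step()
            losses.append(loss.item())
        # A larger drop means faster learning and, heuristically, better labels.
        return losses[0] - losses[-1]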