Unsupervised or Indirectly Supervised Learning


Bayesian Conditional Generative Adversarial Networks

arXiv.org Machine Learning

Traditional GANs use a deterministic generator function (typically a neural network) to transform a random noise input $z$ into a sample $\mathbf{x}$ that the discriminator seeks to distinguish. We propose a new GAN called Bayesian Conditional Generative Adversarial Networks (BC-GANs) that uses a random generator function to transform a deterministic input $y'$ into a sample $\mathbf{x}$. Our BC-GANs extend traditional GANs to a Bayesian framework and naturally handle unsupervised, supervised, and semi-supervised learning problems. Experiments show that the proposed BC-GANs outperform the state of the art.
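As a rough illustration of the contrast drawn in the abstract (not the authors' implementation), the sketch below shows a conventional deterministic generator mapping noise $z$ to a sample, next to a toy stochastic generator whose weights are resampled on every forward pass, so the same input $y'$ yields different samples. Layer sizes and the PyTorch usage are assumptions made for the example.

import torch
import torch.nn as nn

class DeterministicGenerator(nn.Module):
    # Standard GAN generator: a fixed network mapping noise z to a sample x.
    def __init__(self, z_dim=64, x_dim=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))

    def forward(self, z):
        return self.net(z)

class StochasticGenerator(nn.Module):
    # Toy "random generator function": weights are drawn from a learned Gaussian
    # each call, so the same deterministic input y' produces different samples x.
    def __init__(self, y_dim=10, x_dim=784):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(y_dim, x_dim))
        self.w_logstd = nn.Parameter(torch.zeros(y_dim, x_dim))

    def forward(self, y):
        w = self.w_mu + torch.exp(self.w_logstd) * torch.randn_like(self.w_mu)
        return y @ w

A full BC-GAN would place and infer a posterior over such generator weights inside the adversarial training loop; the point here is only the deterministic-versus-random generator distinction.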


[R] Variational Approaches for Auto-Encoding Generative Adversarial Networks • r/MachineLearning

@machinelearnbot

I just want to emphasise for any readers that the density ratio approximation is tight only when the discriminator is optimal. Given that they update the discriminator even less than the generator, they are essentially optimising a quite loose approximation to a likely loose bound on the data log likelihood.
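For readers wanting the reasoning behind that remark: for a fixed generator, the original GAN analysis (Goodfellow et al., 2014) shows the optimal discriminator satisfies

$$D^*(\mathbf{x}) = \frac{p_{\text{data}}(\mathbf{x})}{p_{\text{data}}(\mathbf{x}) + p_g(\mathbf{x})}, \qquad\text{so}\qquad \frac{p_{\text{data}}(\mathbf{x})}{p_g(\mathbf{x})} = \frac{D^*(\mathbf{x})}{1 - D^*(\mathbf{x})}.$$

A discriminator that is updated too infrequently to stay near $D^*$ therefore gives a biased density-ratio estimate, which is the looseness the comment refers to.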


Hacking My Pandora Data With Unsupervised Learning

#artificialintelligence

This is a two-part series about using machine learning to hack my taste in music. In this first piece, I applied unsupervised learning techniques and tools on Pandora data to analyze songs that I like. The second part, which will be published soon, is about using supervised learning on Spotify data to predict whether or not I will like a song. If you take a look at my top tracks on Last.FM, you'll notice a smorgasbord of tracks from artists like LCD Soundsystem, Jimi Hendrix, and Kanye West. When I make a playlist, it's not uncommon for me to include some 80's post-disco, 2000s indie rock, and Nigerian or Turkish funk.
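The post describes its own tooling; purely as a sketch of the kind of unsupervised step involved, clustering per-song audio features with scikit-learn might look like the following, where the file pandora_songs.csv and its column names are invented for the example.

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical export of per-song features; the column names are illustrative.
songs = pd.read_csv("pandora_songs.csv")
features = songs[["energy", "danceability", "acousticness", "tempo"]]

# Standardize the features and group songs into a handful of "taste" clusters.
X = StandardScaler().fit_transform(features)
songs["cluster"] = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
print(songs.groupby("cluster").size())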


[R] "Deep Generative Adversarial Networks for Compressed Sensing Automates MRI", Mardani et al 2017 โ€ข r/MachineLearning

@machinelearnbot

I feel rather uneasy at this application of DCGANs. Yes, we know they are great at hallucinating details and creating single plausible reconstructions of high perceptual detail, so the perceptual ratings do not surprise anyone, but that's not the same thing as making accurate diagnoses, is it?


With machine learning and AI in healthcare, can you speak the language?

#artificialintelligence

As artificial intelligence and machine learning start to make their mark on healthcare in a big way, there's no shortage of hype. But there's also no small amount of uncertainty about just what it all means – literally. "We haven't settled on how to talk about this yet, and it's creating confusion in the market," said Leonard D'Avolio, assistant professor in the Brigham and Women's Division of General Internal Medicine and Primary Care (part of Harvard Medical School), and CEO of machine learning company Cyft. "If I describe what I do as cognitive computing, but a competitor describes what they do as AI or machine learning or data mining, it's hard to even understand what problems we are trying to solve." Because the problems that can be solved in healthcare with AI are numerous and notable, said Zeeshan Syed, director of the clinical inference and algorithms program at Stanford Health Care – whether it's better decision support at the bedside, better business intelligence for the C-suite or big-picture challenges such as managing care "across complex networks of providers for complex populations and complex diseases."


Which Machine Learning Algorithm Should I Use?

@machinelearnbot

Hui Li is Principal Staff Scientist, Data Science at SAS. This resource is designed primarily for beginner to intermediate data scientists or analysts who are interested in identifying and applying machine learning algorithms to the problems that interest them. A typical question asked by a beginner facing a wide variety of machine learning algorithms is "which algorithm should I use?" Even an experienced data scientist cannot tell which algorithm will perform best before trying several. We are not advocating a one-and-done approach, but we do hope to provide some guidance on which algorithms to try first, depending on a few clear factors.
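In that try-a-few-first spirit, a minimal sketch of the workflow with scikit-learn is shown below; the dataset and the shortlist of models are illustrative choices, not taken from the article.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Any labeled dataset works; this built-in one keeps the example self-contained.
X, y = load_breast_cancer(return_X_y=True)

# Compare a few reasonable first choices with 5-fold cross-validation.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm_rbf": SVC(),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")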


CycleGAN

@machinelearnbot

Transferring characteristics from one image to another is an exciting proposition. How cool would it be if you could take a photo and convert it into the style of Van Gogh or Picasso! Or maybe you want to put a smile on Agent 42's face with the virally popular Faceapp. Relaxing the requirement of a one-to-one mapping makes this formulation quite powerful: the same method can be used to tackle a variety of problems simply by varying the input-output domain pairs, whether performing artistic style transfer, adding a bokeh effect to phone camera photos, creating outline maps from satellite images, or converting horses to zebras and vice versa! This is achieved by a type of generative model, specifically a Generative Adversarial Network dubbed CycleGAN by the authors of this paper. Here are some examples of what CycleGAN can do.
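The piece that lets CycleGAN drop the one-to-one pairing requirement is the cycle-consistency loss. A minimal sketch of that term follows, assuming G and F are any two image-to-image PyTorch modules translating in opposite directions; this is not the paper's full objective, which also includes the adversarial losses.

import torch.nn.functional as functional

def cycle_consistency_loss(G, F, real_x, real_y, lam=10.0):
    # L1 penalty that x -> G(x) -> F(G(x)) returns to x, and likewise for y.
    forward_cycle = functional.l1_loss(F(G(real_x)), real_x)   # x -> y_hat -> x_reconstructed
    backward_cycle = functional.l1_loss(G(F(real_y)), real_y)  # y -> x_hat -> y_reconstructed
    return lam * (forward_cycle + backward_cycle)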


AdaGAN: Boosting Generative Models

arXiv.org Machine Learning

Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a reweighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes.
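A highly simplified sketch of the loop described in the abstract is given below; the mixing weight beta, the train_gan trainer, and the reweight rule are placeholders standing in for the paper's actual choices.

import numpy as np

def adagan(data, num_steps, train_gan, reweight, beta=0.5):
    # Greedy mixture of generators: each step trains a new GAN on a reweighted
    # sample and mixes it in, in the spirit of boosting.
    #   train_gan(data, weights) -> generator           (placeholder GAN trainer)
    #   reweight(weights, mixture, data) -> new weights emphasizing regions the
    #                                       current mixture still misses
    n = len(data)
    weights = np.full(n, 1.0 / n)   # start from uniform empirical weights
    mixture = []                    # list of (mixing coefficient, generator)
    for _ in range(num_steps):
        g = train_gan(data, weights)
        # Shrink the old components and add the new one with coefficient beta.
        mixture = [(c * (1.0 - beta), gen) for c, gen in mixture] + [(beta, g)]
        weights = reweight(weights, mixture, data)
    return mixture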


Why Machine Learning is the new technology breakthrough – Letimo

#artificialintelligence

Machine learning is a process where computer algorithms find patterns in data and then predict probable outcomes from that data. It provides computers with the ability to learn from past data and adapt when exposed to new data, without additional programming. Machine learning programs build a model from sample inputs and then use it to predict outputs for new inputs. Machine learning is a type of artificial intelligence (AI). Artificial intelligence refers to "smart" machines performing tasks that normally require human intervention, such as speech recognition and decision making.
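That build-a-model-then-predict loop is just a couple of calls in most libraries; a generic scikit-learn example, unrelated to the article itself:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)             # sample inputs and known outputs
model = DecisionTreeClassifier().fit(X, y)    # build a model from the samples
print(model.predict(X[:3]))                   # predict outputs for given inputs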


Unsupervised Learning and Text Mining of Emotion Terms Using R

#artificialintelligence

Unsupervised learning refers to data science approaches that involve learning without prior knowledge about the classification of the sample data. Wikipedia describes unsupervised learning as "the task of inferring a function to describe hidden structure from 'unlabeled' data (a classification or categorization is not included in the observations)". The overarching objectives of this post were to evaluate and understand the co-occurrence and/or co-expression of emotion words in individual letters, and to determine whether there were any differential expression profiles/patterns of emotion words among the 40 annual shareholder letters. Differential expression of emotion words is used here to refer to quantitative differences in emotion word frequency counts among letters, as well as qualitative differences where certain emotion words occur uniquely in some letters but are absent from others. This is the second part of a companion post I have on "parsing textual data for emotion terms".
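The original post works in R; purely to make the counting step concrete, here is a Python sketch of per-letter emotion-word frequencies, where the tiny word list and the letters/ directory of .txt files are invented for the example (the post itself uses a full emotion lexicon).

import re
from collections import Counter
from pathlib import Path

# Toy lexicon standing in for a real emotion lexicon such as NRC.
EMOTION_WORDS = {"trust", "fear", "joy", "anger", "surprise", "anticipation"}

def emotion_counts(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t in EMOTION_WORDS)

# Hypothetical directory of shareholder letters, one .txt file per letter.
for letter in sorted(Path("letters").glob("*.txt")):
    print(letter.name, emotion_counts(letter.read_text()))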