Uncertainty


The amazing predictive power of conditional probability in Bayes Nets

@machinelearnbot

Using conditional probability gives Bayes Nets strong analytical advantages over traditional regression-based models. This adds to several advantages we discussed in an earlier article. But what is conditional probability, and what makes it different? In short, conditional probability means that the effects of one variable depend on, or flow from, the distribution of another variable (or of several others): the complete state of one variable determines how another behaves.
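As an illustration, here is a minimal Python sketch, not from the article, of how a Bayes Net encodes this dependence through a conditional probability table; the rain/wet-grass variables and all numbers are hypothetical.

```python
# A minimal sketch of conditional probability in a Bayes Net: the
# distribution of Wet depends on, or flows from, the state of Rain.
# All variable names and probabilities are illustrative.

P_rain = {True: 0.2, False: 0.8}                  # P(Rain)
P_wet_given_rain = {                              # P(Wet | Rain)
    True:  {True: 0.9, False: 0.1},
    False: {True: 0.1, False: 0.9},
}

# Marginalize over the parent: P(Wet) = sum_r P(Wet | Rain=r) * P(Rain=r)
p_wet = sum(P_wet_given_rain[r][True] * P_rain[r] for r in P_rain)
print(f"P(Wet) = {p_wet:.2f}")                    # 0.26

# Bayes' rule inverts the dependency: P(Rain | Wet)
p_rain_given_wet = P_wet_given_rain[True][True] * P_rain[True] / p_wet
print(f"P(Rain | Wet) = {p_rain_given_wet:.3f}")  # ~0.692
```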


A primer on universal function approximation with deep learning (in Torch and R)

@machinelearnbot

Arthur C. Clarke famously stated that "any sufficiently advanced technology is indistinguishable from magic." No current technology embodies this statement more than neural networks and deep learning. And like any good magic, it not only dazzles and inspires but also puts fear into people's hearts. One well-known property of artificial neural networks (ANNs) is that they are universal function approximators: given enough hidden units, a neural network can approximate any continuous function on a compact domain to arbitrary accuracy.
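As a quick illustration of the idea (the article itself works in Torch and R), here is a minimal PyTorch sketch fitting a one-hidden-layer network to sin(x) on a compact interval; the architecture and hyperparameters are illustrative, not the article's.

```python
# A minimal sketch of universal function approximation: a small
# tanh network fit to sin(x) on [-pi, pi]. Settings are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(-3.14, 3.14, 200).unsqueeze(1)
y = torch.sin(x)

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=0.01)

for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.5f}")  # small: the net approximates sin
```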


How Bayesian Networks Are Superior in Understanding Effects of Variables

@machinelearnbot

Bayes Nets (or Bayesian Networks) give remarkable results in determining the effects of many variables on an outcome, and they typically perform strongly even in cases where other methods falter or fail. These networks have seen relatively little use on business-related problems, although they have worked successfully for years in fields such as scientific research, public safety, aircraft guidance systems and national defense. Importantly, they often outperform regression, one of the most august and most widely studied and applied multivariate methods, particularly in determining variables' effects.


andrewgordonwilson/bayesgan

@machinelearnbot

This repository contains the TensorFlow implementation of the Bayesian GAN by Yunus Saatchi and Andrew Gordon Wilson. The paper will appear at NIPS 2017. In the Bayesian GAN we propose conditional posteriors for the generator and discriminator weights, and marginalize these posteriors through stochastic gradient Hamiltonian Monte Carlo. Key properties of the Bayesian approach to GANs include (1) accurate predictions on semi-supervised learning problems; (2) minimal intervention for good performance; (3) a probabilistic formulation for inference in response to adversarial feedback; (4) avoidance of mode collapse; and (5) a representation of multiple complementary generative and discriminative models for data, forming a probabilistic ensemble. We illustrate a multimodal posterior over the parameters of the generator.
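For readers unfamiliar with the sampler, here is a schematic Python sketch of a single SGHMC update in the style of Chen et al.'s formulation; it is not the repository's code, and grad_log_post, the step size and the friction constant are illustrative stand-ins.

```python
# A schematic SGHMC step: a noisy momentum update with friction,
# where injected Gaussian noise compensates for minibatch gradient
# noise. `grad_log_post` stands in for the (stochastic) gradient of
# the log posterior over the network weights.
import numpy as np

rng = np.random.default_rng(0)

def sghmc_step(theta, v, grad_log_post, lr=1e-3, friction=0.1):
    """One SGHMC update of parameters theta and momentum v."""
    noise = rng.normal(0.0, np.sqrt(2.0 * friction * lr), size=theta.shape)
    v = (1.0 - friction) * v + lr * grad_log_post(theta) + noise
    return theta + v, v

# Toy usage: sample from a standard normal "posterior", whose
# log-density gradient is simply -theta.
theta, v = np.zeros(1), np.zeros(1)
samples = []
for _ in range(20_000):
    theta, v = sghmc_step(theta, v, lambda t: -t)
    samples.append(theta[0])
print(f"sample std ~ {np.std(samples[2000:]):.2f}")  # roughly 1 for long chains
```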


Related Datasets in Oracle DV Machine Learning models

#artificialintelligence

In this blog we discuss Related datasets produced by Machine Learning algorithms in Oracle Data Visualization. Related datasets are generated when we train or create a Machine Learning model in Oracle DV (present from version 12.2.4.0 onwards, called V4 for short). These datasets contain details about the model, such as prediction rules, accuracy metrics, the confusion matrix and key drivers for prediction, depending on the type of algorithm. Related datasets can be found in the inspect model menu: Inspect Model - Related tab. These datasets are useful in more ways than one.


Naive Bayes in Machine Learning – Towards Data Science

@machinelearnbot

Bayes' theorem finds many uses in probability theory and statistics. Chances are slim that you have never heard of this theorem in your life. It turns out that the theorem has also found its way into the world of machine learning, where it forms the basis of some of the field's most celebrated algorithms. In this article, we will learn all about the Naive Bayes algorithm, along with its variations for different purposes in machine learning. As you might have guessed, this requires us to view things from a probabilistic point of view.
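As a concrete preview, here is a minimal sketch of one common variant, Gaussian Naive Bayes, using scikit-learn (an assumption; the article itself is not tied to a particular library):

```python
# A minimal Gaussian Naive Bayes sketch. Naive Bayes applies Bayes'
# theorem under a "naive" assumption: features are conditionally
# independent given the class.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GaussianNB().fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```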


Why Probability Theory Should be Thrown Under the Bus

@machinelearnbot

So, what is Yann LeCun talking about when he says he is "ready to throw Probability Theory under the bus"? This article attempts to explore that sentiment. The problem with Probability Theory has to do with its efficacy in making predictions. In the article's example, the distributions are obviously different, yet their summary statistics are identical. Said differently, if the basis of your predictions is expectations calculated from probability distributions, then you can very easily be fooled.
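To see the failure mode concretely, here is a minimal numpy sketch (an illustration, not LeCun's own example) of two clearly different distributions whose first two moments match:

```python
# Two very different distributions with identical mean and variance:
# expectation-based summaries cannot tell them apart.
import numpy as np

rng = np.random.default_rng(0)
gaussian = rng.normal(loc=0.0, scale=1.0, size=100_000)
# A 50/50 mixture of point masses at -1 and +1 also has mean 0 and
# variance 1, yet its shape is nothing like a Gaussian's.
bimodal = rng.choice([-1.0, 1.0], size=100_000)

for name, x in [("gaussian", gaussian), ("bimodal", bimodal)]:
    print(f"{name}: mean={x.mean():+.3f}, var={x.var():.3f}")
# Both print mean ~0 and variance ~1.
```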


Algorithms Identify People with Suicidal Thoughts

IEEE Spectrum Robotics Channel

Mention strong words such as "death" or "praise" to someone who has suicidal thoughts, and chances are the neurons in their brain activate in a totally different pattern than those of a non-suicidal person. That is what researchers at the University of Pittsburgh and Carnegie Mellon University discovered, and trained algorithms to distinguish, using data from fMRI brain scans. The scientists published the findings of their small-scale study Monday in the journal Nature Human Behaviour. They hope to study a larger group of people and use the data to develop simple tests that doctors can use to more readily identify people at risk of suicide. Suicide is the second-leading cause of death among young adults, according to the U.S. Centers for Disease Control and Prevention.


AI - The present in the making

#artificialintelligence

For many people, the concept of Artificial Intelligence (AI) is a thing of the future: a technology that has yet to be introduced. But Professor Jon Oberlander disagrees. He is quick to point out that AI is not in the future; it is now, in the making. He began by mentioning Alexa, Amazon's star product.


Bayesian Reasoning and Machine Learning: David Barber: 8601400496688: Amazon.com: Books

#artificialintelligence

"With approachable text, examples, exercises, guidelines for teachers, a MATLAB toolbox and an accompanying web site, Bayesian Reasoning and Machine Learning by David Barber provides everything needed for your machine learning course. Jaakko Hollmén, Aalto University "Barber has done a commendable job in presenting important concepts in probabilistic modeling and probabilistic aspects of machine learning. The chapters on graphical models form one of the clearest and most concise presentations I have seen. The book has wide coverage of probabilistic machine learning, including discrete graphical models, Markov decision processes, latent variable models, Gaussian process, stochastic and deterministic inference, among others. The material is excellent for advanced undergraduate or introductory graduate course in graphical models, or probabilistic machine learning.