Uncertainty


AI that can shoot down fighter planes helps treat bipolar disorder: Engineering and medical researchers successfully apply genetic fuzzy logic to predict treatment outcomes for bipolar patients

#artificialintelligence

David Fleck, an associate professor at the UC College of Medicine, and his co-authors used an artificial intelligence technique called "genetic fuzzy trees" to predict how bipolar patients would respond to lithium. The study authors found that even the best of eight common models used in treating bipolar disorder predicted who would respond to lithium treatment with only 75 percent accuracy. The model UC researchers developed using AI predicted response to lithium treatment with 88 percent accuracy, and with 80 percent accuracy in validation. It turns out that the same kind of artificial intelligence that outmaneuvered Air Force pilots last year in simulation after simulation at Wright-Patterson Air Force Base is equally adept at making beneficial decisions that can help doctors treat disease.


Numbers war: How Bayesian vs frequentist statistics influence AI

#artificialintelligence

In other words, infected people test positive 99 per cent of the time and healthy people test negative 99 per cent of the time. We also need a figure for the prevalence of the infection in the population; if we don't know it, we can start by guessing that half of the population is infected and half is healthy. On that guess, a positive test seems to be near-certain proof of infection. But this line of reasoning ignores the fact that 1 per cent of the healthy people will test positive and, as the proportion of healthy people increases, the number of those healthy people who test positive begins to overwhelm the number of infected people who also test positive. In slightly more formal terms, we would say that the number of false positives (healthy people being misdiagnosed) begins to overwhelm the true positives (infected people testing positive).
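
To make this concrete, here is a minimal Python sketch (the function name and the prevalence values are illustrative assumptions, not from the article) applying Bayes' theorem to the test described above:

```python
def p_infected_given_positive(sensitivity=0.99, specificity=0.99, prevalence=0.5):
    """P(infected | positive test) via Bayes' theorem."""
    true_pos = sensitivity * prevalence               # infected people who test positive
    false_pos = (1 - specificity) * (1 - prevalence)  # healthy people who test positive
    return true_pos / (true_pos + false_pos)

# As prevalence falls, false positives begin to overwhelm true positives.
for prevalence in (0.5, 0.1, 0.01, 0.001):
    print(f"prevalence {prevalence}: P(infected | positive) = "
          f"{p_infected_given_positive(prevalence=prevalence):.3f}")
```

With the 50:50 starting guess a positive result is 99 per cent reliable, but at a prevalence of 1 in 1,000 it indicates infection only about 9 per cent of the time.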



Bayesian Basics, Explained

@machinelearnbot

Andrew Gelman: Bayesian statistics uses the mathematical rules of probability to combine data with "prior information" to give inferences that (if the model being used is correct) are more precise than would be obtained from either source of information alone. You can reproduce the classical methods using Bayesian inference: in a regression prediction context, setting the prior of a coefficient to uniform or "noninformative" is mathematically equivalent to including the corresponding predictor in a least squares or maximum likelihood estimate; setting the prior to a spike at zero is the same as excluding the predictor; and you can reproduce a pooling of predictors through a joint deterministic prior on their coefficients. When Bayesian methods work best, it's by providing a clear set of paths connecting data, mathematical/statistical models, and the substantive theory of the variation and comparison of interest. Bayesian methods offer a clarity that comes from the explicit specification of a so-called "generative model": a probability model of the data-collection process and a probability model of the underlying parameters.
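
The equivalence Gelman describes can be written out directly (a standard derivation, not quoted from the post). By Bayes' rule the posterior for a coefficient vector beta is:

```latex
p(\beta \mid y) \;\propto\; p(y \mid \beta)\, p(\beta)
```

With a uniform prior $p(\beta) \propto 1$, the posterior is proportional to the likelihood alone, so its mode is the maximum likelihood estimate:

```latex
\hat{\beta}_{\mathrm{MAP}}
  \;=\; \arg\max_{\beta}\, p(y \mid \beta)\, p(\beta)
  \;=\; \arg\max_{\beta}\, p(y \mid \beta)
  \;=\; \hat{\beta}_{\mathrm{MLE}}
```

For a Gaussian regression likelihood this is the least squares solution; a prior that is a spike (point mass) at zero instead forces $\hat{\beta} = 0$, i.e., the predictor is excluded.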


will wolf

#artificialintelligence

Edward is a probabilistic programming library that bridges this gap: "black-box" variational inference enables us to fit extremely flexible Bayesian models to large-scale data. To "pull us down the path," we build three models in additive fashion: a Bayesian linear regression model, a Bayesian linear regression model with random effects, and a neural network with random effects. To infer posterior distributions of the model's parameters conditional on the observed data, we employ variational inference -- one of three inference classes supported in Edward. Thus far, we've been approximating the relationship between our fixed effects and response variable with a simple dot product; can we leverage Keras to make this relationship more expressive?
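
For flavor, here is a minimal Bayesian linear regression sketch in Edward's style (the shapes, synthetic data, and iteration count are assumptions; the post's actual models go on to add random effects and a Keras network):

```python
import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

N, D = 100, 5  # placeholder dataset size and feature count
X_train = np.random.randn(N, D).astype(np.float32)    # synthetic features
y_train = X_train.dot(np.ones(D)).astype(np.float32)  # synthetic response

X = tf.placeholder(tf.float32, [N, D])
w = Normal(loc=tf.zeros(D), scale=tf.ones(D))  # prior over weights
b = Normal(loc=tf.zeros(1), scale=tf.ones(1))  # prior over bias
y = Normal(loc=ed.dot(X, w) + b, scale=tf.ones(N))

# Variational families approximating the posteriors over w and b.
qw = Normal(loc=tf.Variable(tf.zeros(D)),
            scale=tf.nn.softplus(tf.Variable(tf.zeros(D))))
qb = Normal(loc=tf.Variable(tf.zeros(1)),
            scale=tf.nn.softplus(tf.Variable(tf.zeros(1))))

# Black-box variational inference.
inference = ed.KLqp({w: qw, b: qb}, data={X: X_train, y: y_train})
inference.run(n_iter=500)
```

KLqp is the "black-box" variational algorithm referred to above: it fits the variational parameters of qw and qb by stochastic gradient descent on a KL-divergence objective.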


The Perceptron Algorithm explained with Python code

@machinelearnbot

To do this, we can train a classifier with a 'training dataset'; once the classifier is trained (that is, once its model parameters have been determined) and can accurately classify the training set, we can use it to classify new data (the test set). Logistic Regression uses a functional approach to classify data, and the Naive Bayes classifier uses a statistical (Bayesian) approach. Classifiers that use a geometrical approach include the Perceptron and SVM (Support Vector Machine) methods. Although Support Vector Machines are used more often, I think a good understanding of the Perceptron algorithm is essential to understanding Support Vector Machines and Neural Networks.
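
To make the geometrical approach concrete, here is a minimal NumPy sketch of the perceptron learning rule (the toy data and hyperparameters are illustrative assumptions, not taken from the post):

```python
import numpy as np

def perceptron_train(X, y, epochs=10, lr=1.0):
    """Learn weights w and bias b so that sign(X @ w + b) matches labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:  # misclassified: nudge the separating hyperplane
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Linearly separable toy data.
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w, b = perceptron_train(X, y)
print(np.sign(X @ w + b))  # should reproduce y
```

Each update moves the separating hyperplane toward a misclassified point; for linearly separable data the loop is guaranteed to converge.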


ML notes: Why the log-likelihood? – metaflow-ai

#artificialintelligence

Secretly, you are hoping that your model will predict future experiences; people call that "generalisation". If we had a sum instead of a product, we could load one datum at a time, compute its partial derivatives, accumulate those gradients, and apply the optimisation step at the end. This little term is what people call the regularisation term; it takes into account your "prior" knowledge of the problem. Notice how engineering problems pushed us to find better notations and better optimisation procedures; surprisingly, in machine learning the basic probability theory is often not that complicated to grasp, but the engineering feats needed to make it actually work are insane.
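
The two observations above are, respectively, a standard identity and MAP estimation; written out for reference (standard results, not quoted from the post):

```latex
\log p(x_1, \dots, x_N \mid \theta)
  \;=\; \log \prod_{i=1}^{N} p(x_i \mid \theta)
  \;=\; \sum_{i=1}^{N} \log p(x_i \mid \theta)
```

so the gradient decomposes into per-datum terms that can be accumulated one at a time; and maximising the posterior rather than the likelihood adds the log-prior as an extra summand,

```latex
\hat{\theta}_{\mathrm{MAP}}
  \;=\; \arg\max_{\theta} \Bigl[\, \sum_{i=1}^{N} \log p(x_i \mid \theta) \;+\; \log p(\theta) \Bigr]
```

where $\log p(\theta)$ is exactly the regularisation term described above.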


CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers

#artificialintelligence

Chapter 1: Introduction to Bayesian Methods. An introduction to the philosophy and practice of Bayesian methods, answering the question, "What is probabilistic programming?" Chapter 2: A little more on PyMC. We explore modeling Bayesian problems using Python's PyMC library through examples.
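
As a taste of the modeling style the book teaches, here is a minimal PyMC sketch (written against the PyMC3 API with synthetic data as assumptions; it is not an example from the book itself):

```python
import numpy as np
import pymc3 as pm

data = np.random.poisson(5.0, size=100)  # synthetic count data (an assumption)

with pm.Model():
    lam = pm.Exponential("lam", 1.0)          # prior belief about the Poisson rate
    pm.Poisson("obs", mu=lam, observed=data)  # likelihood of the observed counts
    trace = pm.sample(1000)                   # posterior samples via MCMC

print(trace["lam"].mean())  # posterior mean of the rate
```

The pattern is the same as in the book's examples: specify priors and a likelihood declaratively, then let the sampler produce the posterior.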


Introduction to Machine Learning & Face Detection in Python

#artificialintelligence

This course is about the fundamental concepts of machine learning, focusing on neural networks, SVMs and decision trees. These topics are getting very hot nowadays because such learning algorithms can be used in several fields, from software engineering to investment banking. Learning algorithms can recognize patterns that, for example, help detect cancer, or we may construct algorithms that can make a very good guess about stock price movements in the market. We will talk about Naive Bayes classification and tree-based algorithms such as decision trees and random forests.