
How to Develop a Naive Bayes Classifier from Scratch in Python

#artificialintelligence

Classification is a predictive modeling problem that involves assigning a label to a given input data sample. The problem of classification predictive modeling can be framed as calculating the conditional probability of a class label given a data sample. Bayes Theorem provides a principled way to calculate this conditional probability, although in practice it requires an enormous number of samples (a very large dataset) and is computationally expensive. Instead, the calculation of Bayes Theorem can be simplified by making some assumptions, such as that each input variable is independent of all the others. Although this is a strong and often unrealistic assumption, it has the effect of making the conditional probability tractable to calculate, and it results in an effective classification model referred to as Naive Bayes.
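
As a rough illustration of the idea (not the article's own code), a minimal Gaussian Naive Bayes sketch with invented per-class feature statistics might look like this:

# Minimal Gaussian Naive Bayes sketch; the statistics below are made up.
from math import sqrt, pi, exp

def gaussian_pdf(x, mean, std):
    # Probability density of x under a Gaussian with the given mean and std.
    return (1.0 / (sqrt(2.0 * pi) * std)) * exp(-((x - mean) ** 2) / (2.0 * std ** 2))

def predict(sample, priors, stats):
    # Score each class as P(class) * product of per-feature likelihoods,
    # relying on the "naive" assumption that features are independent.
    best_label, best_score = None, -1.0
    for label in priors:
        score = priors[label]
        for x, (mean, std) in zip(sample, stats[label]):
            score *= gaussian_pdf(x, mean, std)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical per-class feature statistics: (mean, std) for each of two features.
stats = {0: [(1.0, 0.5), (2.0, 0.5)], 1: [(3.0, 0.5), (4.0, 0.5)]}
priors = {0: 0.5, 1: 0.5}
print(predict([1.1, 2.2], priors, stats))  # expected: 0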


A Gentle Introduction to the Bayes Optimal Classifier

#artificialintelligence

Because the Bayes classifier is optimal, the Bayes error is the minimum possible error that can be made. Further, the model is often described in terms of classification, e.g. the Bayes Classifier. Nevertheless, the principle applies just as well to regression: that is, predictive modeling problems where a numerical value is predicted instead of a class label. It is a theoretical model, but it is held up as an ideal that we may wish to pursue. In theory we would always like to predict qualitative responses using the Bayes classifier. But for real data, we do not know the conditional distribution of Y given X, and so computing the Bayes classifier is impossible. Therefore, the Bayes classifier serves as an unattainable gold standard against which to compare other methods.
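
As a toy illustration, assuming an invented conditional distribution P(Y | X) were somehow known, the Bayes classifier would simply predict the most probable class for each input:

# Toy Bayes optimal classifier; the conditional distribution is invented.
true_conditional = {
    'x1': {'A': 0.9, 'B': 0.1},
    'x2': {'A': 0.4, 'B': 0.6},
}

def bayes_classify(x):
    # argmax over class labels of P(Y = y | X = x)
    posterior = true_conditional[x]
    return max(posterior, key=posterior.get)

for x in true_conditional:
    print(x, '->', bayes_classify(x))
# The Bayes error is the leftover probability mass: 0.1 for x1, 0.4 for x2.
# For real data this distribution is unknown, which is why the classifier
# is a gold standard rather than a practical method.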


A Gentle Introduction to Maximum Likelihood Estimation for Machine Learning

#artificialintelligence

Density estimation is the problem of estimating the probability distribution for a sample of observations from a problem domain. There are many techniques for density estimation, although a common framework used throughout the field of machine learning is maximum likelihood estimation. Maximum likelihood estimation involves defining a likelihood function that calculates the conditional probability of observing the data sample given a probability distribution and its parameters. This approach can be used to search a space of possible distributions and parameters. This flexible probabilistic framework also provides the foundation for many machine learning algorithms, including important methods such as linear regression and logistic regression for predicting numeric values and class labels respectively, as well as, more generally, deep learning artificial neural networks.
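
As a minimal sketch (assuming a Gaussian model and synthetic data), the maximum likelihood estimates of the mean and variance can be computed in closed form and checked directly against the log-likelihood:

# MLE for a Gaussian: the sample mean and (biased) sample variance
# maximize the log-likelihood. Data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=1000)

mu_hat = data.mean()                      # MLE of the mean
var_hat = ((data - mu_hat) ** 2).mean()   # MLE of the variance (divides by n)

def log_likelihood(mu, var):
    # Sum of log N(x | mu, var) over the sample.
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (data - mu) ** 2 / (2 * var))

print(mu_hat, var_hat)
print(log_likelihood(mu_hat, var_hat))        # higher than...
print(log_likelihood(mu_hat + 0.5, var_hat))  # ...a nearby parameter setting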


Probability for Machine Learning (7-Day Mini-Course)

#artificialintelligence

Probability is a field of mathematics that is widely agreed to be the bedrock of machine learning. Although probability is a large field with many esoteric theories and findings, the nuts-and-bolts tools and notation taken from the field are required for machine learning practitioners. With a solid foundation in what probability is, it becomes possible to focus on just the relevant parts. In this crash course, you will discover how to get started and confidently understand and implement probabilistic methods used in machine learning with Python in seven days. This is a big and important post. You might want to bookmark it.


Naive Bayes for Machine Learning - Machine Learning Mastery

#artificialintelligence

Naive Bayes is a simple but surprisingly powerful algorithm for predictive modeling. In this post you will discover the Naive Bayes algorithm for classification. This post is written for developers and does not assume any background in statistics or probability, although knowing a little probability wouldn't hurt. In machine learning we are often interested in selecting the best hypothesis (h) given data (d).
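
This selection problem leads naturally to Bayes' theorem: since P(d) is the same for every hypothesis, the hypothesis that maximizes P(d | h) * P(h) is the maximum a posteriori (MAP) hypothesis. A rough sketch with made-up priors and likelihoods:

# MAP hypothesis selection: drop the shared denominator P(d) and
# maximize P(d | h) * P(h). The numbers below are invented.
hypotheses = {
    'h1': {'prior': 0.7, 'likelihood': 0.2},  # P(h1), P(d | h1)
    'h2': {'prior': 0.3, 'likelihood': 0.9},  # P(h2), P(d | h2)
}

best = max(hypotheses, key=lambda h: hypotheses[h]['likelihood'] * hypotheses[h]['prior'])
print(best)  # h2, since 0.9 * 0.3 = 0.27 beats 0.2 * 0.7 = 0.14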