Learning Graphical Models


Top 10 Machine Learning Algorithms

#artificialintelligence

This was the subject of a question asked on Quora: What are the top 10 data mining or machine learning algorithms? Some modern algorithms, such as collaborative filtering, recommendation engines, segmentation, or attribution modeling, are missing from the lists below. Algorithms from graph theory (to find the shortest path in a graph, or to detect connected components), from operations research (the simplex method, to optimize the supply chain), or from time series analysis, are not listed either. Nor could I find MCMC (Markov Chain Monte Carlo) and related algorithms used to fit hierarchical, spatio-temporal, and other Bayesian models. My point of view is of course biased, but I would also like to add some algorithms developed or re-developed at Data Science Central's research lab. These algorithms are described in the article What you won't learn in statistics classes.


List of supervised and unsupervised Machine Learning Algorithms

#artificialintelligence

There are many machine learning algorithms available, and among them a handful are used most often in day-to-day work. Some of these machine learning algorithms are mentioned below.


Artificial Intelligence vs. Machine Learning vs. Deep Learning

#artificialintelligence

Now that we better understand what Artificial Intelligence means, we can take a closer look at Machine Learning and Deep Learning and draw a clearer distinction between the two. Machine Learning incorporates "classical" algorithms for various kinds of tasks such as clustering, regression, or classification. Machine Learning algorithms must be trained on data: the more data you provide to your algorithm, the better it gets. The "training" part of a Machine Learning model means that the model tries to optimize along a certain dimension.
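
As a minimal sketch of that "training" idea, the snippet below fits a classifier to labeled data by minimizing a loss; the synthetic dataset and the choice of logistic regression are illustrative assumptions, not from the article.

```python
# Sketch: "training" means fitting model parameters to labeled data
# by optimizing an objective (here, the log loss of logistic regression).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)          # "training": optimize the weights on data
print("test accuracy:", model.score(X_test, y_test))
```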


Balancing Act in Datasets of a Machine Learning algorithm

#artificialintelligence

When dealing with imbalanced classes, we may need to do some extra work and planning to make sure that our algorithms give us useful results. In this blog, I examine just two classification techniques to illustrate the issue, but you should know that the problem generalizes. For good reason, supervised classification algorithms -- which use labeled data -- take class distributions into account. However, when we're trying to detect classes that are important, but rare compared to the alternatives, it can be difficult to develop a model that catches them. Here, after diving into the problem with some examples, I outline a few of the tried and true techniques for solving it.
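
One of the tried and true techniques the article alludes to is re-weighting the classes so the rare class contributes more to the loss. Below is a minimal sketch of that idea; the synthetic dataset and the 99:1 imbalance ratio are illustrative assumptions.

```python
# Sketch: class re-weighting as one remedy for imbalanced classes.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Illustrative imbalanced dataset: ~99% majority class, ~1% rare class.
X, y = make_classification(n_samples=5000, weights=[0.99, 0.01],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" scales each class inversely to its frequency,
# so the classifier is penalized more heavily for missing the rare class.
clf = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```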



A Gentle Introduction to Monte Carlo Sampling for Probability

#artificialintelligence

Monte Carlo methods are a class of techniques for randomly sampling a probability distribution. There are many problem domains where describing or estimating the probability distribution is relatively straightforward, but calculating a desired quantity is intractable. This may be due to many reasons, such as the stochastic nature of the domain or an exponential number of random variables. Instead, a desired quantity can be approximated by using random sampling, referred to as Monte Carlo methods. These methods were initially used around the time that the first computers were created and remain pervasive through all fields of science and engineering, including artificial intelligence and machine learning.
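
A minimal sketch of the core idea: approximate a quantity that is awkward to compute directly by averaging over random samples. The target expectation below is an illustrative choice with a known closed form, so the estimate can be checked.

```python
# Sketch: Monte Carlo estimation of E[f(X)] by averaging random samples.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=100_000)

# Estimate E[exp(X)] for X ~ N(0, 1); the exact answer is exp(0.5).
estimate = np.exp(samples).mean()
print(f"Monte Carlo: {estimate:.4f}  exact: {np.exp(0.5):.4f}")
```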


16. Appendix: Mathematics for Deep Learning -- Dive into Deep Learning 0.7 documentation

#artificialintelligence

One of the wonderful parts of modern deep learning is the fact that much of it can be understood and used without a full understanding of the mathematics beneath it. This is a sign that the field is maturing. Just as most software developers no longer need to worry about the theory of computable functions, or whether programming languages without a goto can emulate programming languages with a goto with at most constant overhead, the deep learning practitioner need not worry about the theoretical foundations of maximum likelihood learning, provided one can find an architecture that approximates a target function to an arbitrary degree of accuracy. That said, we are not quite there yet. Sometimes when building a model in practice you will need to understand how architectural choices influence gradient flow, or what assumptions you are making by training with a certain loss function.
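
As a minimal sketch of the "gradient flow" point, the snippet below shows that the derivative of a deep chain of sigmoids shrinks geometrically, since the sigmoid's derivative never exceeds 0.25; the depth and input value are illustrative choices, not from the text.

```python
# Sketch: vanishing gradients in a deep chain of sigmoid activations.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x, grad = 0.5, 1.0
for layer in range(20):
    s = sigmoid(x)
    grad *= s * (1.0 - s)   # chain rule: multiply by sigmoid'(x) <= 0.25
    x = s
print(f"gradient after 20 sigmoid layers: {grad:.3e}")  # vanishingly small
```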


How Bayes' Theorem is Applied in Machine Learning - KDnuggets

#artificialintelligence

In the previous post we saw what Bayes' Theorem is, and went through an easy, intuitive example of how it works. You can find that post here. If you don't know what Bayes' Theorem is and have not had the pleasure of reading it yet, I recommend you do, as it will make understanding the present article a lot easier. In this post, we will see the uses of this theorem in Machine Learning. As mentioned in the previous post, Bayes' Theorem tells us how to gradually update our knowledge of something as we get more evidence about that something.
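
A minimal sketch of that "gradual update" idea: apply Bayes' Theorem, P(H|E) = P(E|H) P(H) / P(E), then feed the posterior back in as the new prior when more evidence arrives. The numbers (a diagnostic-test style example) are illustrative assumptions, not from the article.

```python
# Sketch: Bayes' Theorem as an update rule applied repeatedly.
def bayes_update(prior, likelihood, false_positive_rate):
    # P(E) by total probability, then P(H|E) = P(E|H) * P(H) / P(E).
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# One positive test result shifts a 1% prior to about 16%.
posterior = bayes_update(prior=0.01, likelihood=0.95, false_positive_rate=0.05)
print(f"posterior after one positive test: {posterior:.3f}")

# Using the posterior as the new prior: a second positive result
# pushes the belief to roughly 79%.
posterior2 = bayes_update(prior=posterior, likelihood=0.95,
                          false_positive_rate=0.05)
print(f"posterior after a second positive test: {posterior2:.3f}")
```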


Probabilistic Model Selection with AIC, BIC, and MDL

#artificialintelligence

Model selection is the problem of choosing one from among a set of candidate models. It is common to choose the model that performs best on a hold-out test dataset, or to estimate model performance using a resampling technique such as k-fold cross-validation. An alternative approach to model selection involves using probabilistic statistical measures that attempt to quantify both the model's performance on the training dataset and the complexity of the model. Examples include the Akaike and Bayesian Information Criteria and the Minimum Description Length. The benefit of these information criteria is that they do not require a hold-out test set, although a limitation is that they do not take the uncertainty of the models into account and may end up selecting models that are too simple.
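
A minimal sketch of AIC (2k - 2 ln L) and BIC (k ln n - 2 ln L) for comparing models of different complexity, assuming Gaussian errors so the log-likelihood follows from the residual variance. The synthetic data and the candidate polynomial degrees are illustrative assumptions.

```python
# Sketch: AIC/BIC for polynomial models fit to data whose true trend is linear.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-3, 3, n)
y = 1.0 + 2.0 * x + rng.normal(scale=1.0, size=n)   # true model is linear

for degree in (1, 2, 5):
    k = degree + 2                      # coefficients + intercept + noise variance
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid**2)
    # Gaussian log-likelihood at the MLE of the noise variance.
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    aic = 2 * k - 2 * log_lik
    bic = k * np.log(n) - 2 * log_lik
    print(f"degree {degree}: AIC={aic:.1f}  BIC={bic:.1f}")  # lower is better
```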