OBSERVATION


How Machines Make Sense of Big Data: an Introduction to Clustering Algorithms

#artificialintelligence

Repeat Step 1 by recalculating the distance matrix, but this time merge the Bottlenose & Risso's Dolphins into a single object with length 3.3m. Then, we repeat Step 1 -- recalculate the distance matrix, but now we've merged the Pilot & Killer Whales into a single object of length 7.0m. Then, we repeat Step 1 and compute a new distance matrix, having merged the Bottlenose & Risso's Dolphins with the Pilot & Killer Whales. Then, it's back to Step 1 -- compute the distance matrix, having merged the Humpback & Fin Whales.
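The merge loop described above can be sketched in plain Python, assuming centroid linkage (each merged cluster is represented by its mean length). Only the 3.3 m and 7.0 m merged values come from the text; the individual animal lengths below are illustrative stand-ins consistent with them:

```python
def agglomerate(lengths):
    """Repeatedly merge the two closest clusters; a merged cluster is
    represented by the mean length of its members (centroid linkage)."""
    clusters = dict(lengths)
    merges = []
    while len(clusters) > 1:
        names = list(clusters)
        # Step 1: recompute pairwise distances and pick the closest pair
        a, b = min(
            ((x, y) for i, x in enumerate(names) for y in names[i + 1:]),
            key=lambda pair: abs(clusters[pair[0]] - clusters[pair[1]]),
        )
        merged = round((clusters.pop(a) + clusters.pop(b)) / 2, 2)
        clusters[a + " & " + b] = merged
        merges.append((a + " & " + b, merged))
    return merges

merges = agglomerate({
    "Bottlenose": 3.0, "Risso's": 3.6,   # dolphins
    "Pilot": 6.5, "Killer": 7.5,         # smaller whales
    "Humpback": 15.0, "Fin": 19.5,       # large whales (illustrative)
})
# First merge is the dolphins at 3.3 m, then the Pilot & Killer Whales at 7.0 m,
# mirroring the steps in the text.
```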


A Practical Guide to Tree Based Learning Algorithms

#artificialintelligence

Common examples of tree-based models are decision trees, random forests, and boosted trees. The CART model involves selecting input variables and split points on those variables until a suitable tree is constructed. In order to perform recursive binary splitting, first select the predictor $X_j$ and the cut point $s$ such that splitting the predictor space into the regions (half-planes) \(R_1(j,s) = \{X \mid X_j < s\}\) and \(R_2(j,s) = \{X \mid X_j \ge s\}\) leads to the greatest possible reduction in RSS. Just as in the regression setting, recursive binary splitting is used to grow a classification tree.
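The split-selection rule above can be illustrated with a small sketch in plain Python; the exhaustive scan over candidate cut points and the toy dataset are my own assumptions, not the article's code:

```python
def best_split(X, y):
    """Search every predictor j and candidate cut point s for the
    (j, s) pair that minimizes the total RSS of the two half-planes."""
    def rss(vals):
        if not vals:
            return 0.0
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals)

    best = None  # (total_rss, j, s)
    n_features = len(X[0])
    for j in range(n_features):
        for s in sorted({row[j] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[j] < s]    # R1(j, s)
            right = [yi for row, yi in zip(X, y) if row[j] >= s]  # R2(j, s)
            total = rss(left) + rss(right)
            if best is None or total < best[0]:
                best = (total, j, s)
    return best

X = [[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]]
y = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
rss_after, j, s = best_split(X, y)  # a perfect split: X_0 < 10 vs X_0 >= 10
```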


From our archives: Chasing eclipses in the 19th and 20th century

Popular Science

In 1905, three naval ships took an American expedition to Spain to view an eclipse, where astronomers set up an entire camp complete with a telegraph to make detailed observations. The author, noted astronomer Samuel Alfred Mitchell, describes the expedition in detail, including asides about the number of rounds of ammo used in diplomatic salutes in Gibraltar (152), bullfighting, and how friendly the Spaniards were--even as the American delegation mangled their language. By June of 1918, researchers had learned more about the Sun's corona. During this eclipse, which also cut across a large swath of the United States, researchers hoped to observe flickering shadows that had been reported but not captured on film.


What Is Data Science?

#artificialintelligence

Data science is part of a tiered collection of related technologies -- Big Data is facilitated by data science, which in turn is facilitated by machine learning. Data science is a confluence of disciplines including but not limited to computer science, mathematical statistics, probability theory, machine learning, software engineering, distributed computer architectures, and data visualization. Unlike supervised learning, which relies on labeled data sets, unsupervised learning is a set of statistical tools intended for applications where there are no labels (response variables). As a facilitator of so-called Big Data, data science possesses the technologies, in particular machine learning, required to embrace the demands of growing data sets.


The Fundamental Statistics Theorem Revisited

@machinelearnbot

It turned out that putting more weight on close neighbors, and increasingly lower weight on more distant neighbors (with weights slowly decaying to zero based on the distance to the neighbor in question), was the solution to the problem. For those interested in the theory, the fact that cases 1, 2 and 3 yield convergence to the Gaussian distribution is a consequence of the Central Limit Theorem under the Liapounov condition. More specifically, because the samples produced here come from uniformly bounded distributions (we use a random number generator to simulate uniform deviates), all that is needed for convergence to the Gaussian distribution is that the sum of the squares of the weights -- and thus Stdev(S) as n tends to infinity -- must be infinite. More generally, we can work with more complex auto-regressive processes with a covariance matrix as general as possible, then compute S as a weighted sum of the X(k)'s, find a relationship between the weights and the covariance matrix, and eventually identify conditions on the covariance matrix that guarantee convergence to the Gaussian distribution.
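As a sanity check of that condition (my own sketch, not the article's code): with weights w_k = 1/sqrt(k), the weights decay to zero, yet the sum of their squares is the harmonic series, which diverges -- so the standardized weighted sum of uniform deviates should look approximately Gaussian:

```python
import math
import random

def standardized_weighted_sum(n, rng):
    # w_k = 1/sqrt(k): weights decay to zero, but the sum of their squares
    # (the harmonic series) diverges, so Stdev(S) grows without bound and
    # the CLT under the Liapounov condition applies.
    weights = [1 / math.sqrt(k) for k in range(1, n + 1)]
    s = sum(w * rng.uniform(-1, 1) for w in weights)  # bounded, mean-0 deviates
    sd = math.sqrt(sum(w * w for w in weights) / 3)   # Var of U(-1, 1) is 1/3
    return s / sd                                     # ~ N(0, 1) for large n

rng = random.Random(42)
samples = [standardized_weighted_sum(500, rng) for _ in range(5000)]
mean = sum(samples) / len(samples)                            # expect ~ 0
within_1sd = sum(abs(x) < 1 for x in samples) / len(samples)  # expect ~ 0.68
```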


Introduction to Machine Learning for Developers

#artificialintelligence

The two main types of machine learning algorithms are supervised and unsupervised learning. Of the many supervised algorithms available, one of the most popular is the Naive Bayes model, which is often a good starting point for developers since its underlying probabilistic model is fairly easy to understand and it is easy to execute. Decision trees are also predictive models. They come in two types, regression trees (which take continuous values) and classification trees (which take finite values), and use a divide-and-conquer strategy that recursively separates the data to generate the tree. Check out the rest of the blog for more resources on natural language processing and machine learning algorithms, such as LDA for text classification, increasing the accuracy of a Nudity Detection algorithm, and a beginner's tutorial on using Scikit-learn to solve FizzBuzz.


How to Handle Imbalanced Classes in Machine Learning

#artificialintelligence

This is a surprisingly common problem in machine learning (specifically in classification), occurring in datasets with a disproportionate ratio of observations in each class. Let's train another model using Logistic Regression, this time on the balanced dataset: great, now the model is no longer predicting just one class. In modern applied machine learning, tree ensembles (Random Forests, Gradient Boosted Trees, etc.) almost always outperform single decision trees. While these results are encouraging, the model could be overfit, so you should still evaluate your model on an unseen test set before making the final decision.
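One common way to produce such a balanced dataset, sketched in plain Python (random upsampling of the minority class with replacement; the data and helper names are illustrative):

```python
import random
from collections import Counter

def upsample_minority(rows, get_label, rng):
    """Resample each under-represented class (with replacement) until
    every class matches the majority class count."""
    groups = {}
    for row in rows:
        groups.setdefault(get_label(row), []).append(row)
    target = max(len(g) for g in groups.values())
    balanced = []
    for group in groups.values():
        balanced.extend(group)
        balanced.extend(rng.choices(group, k=target - len(group)))
    return balanced

rows = [("normal", 0)] * 90 + [("fraud", 1)] * 10    # 90:10 imbalance
balanced = upsample_minority(rows, lambda r: r[1], random.Random(0))
counts = Counter(label for _, label in balanced)     # both classes now 90
```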


The Best Metric to Measure Accuracy of Classification Models

#artificialintelligence

To understand the implication of translating the probability number, let's understand a few basic concepts relating to evaluating a classification model with the help of an example. Since we are now comfortable with the interpretation of the Confusion Matrix, let's look at some popular metrics used for testing classification models. Since its formula doesn't contain FP and TN, Sensitivity may give you a biased result, especially for imbalanced classes. In the example of fraud detection, Sensitivity gives you the percentage of correctly predicted frauds from the pool of actual frauds, while Precision gives you the percentage of correctly predicted frauds from the pool of total predicted frauds.
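In confusion-matrix terms, those two metrics reduce to a few lines; the counts below are made up for illustration:

```python
def metrics(tp, fp, fn, tn):
    """Compute metrics from confusion-matrix counts (fraud = positive)."""
    sensitivity = tp / (tp + fn)  # correctly predicted frauds / actual frauds
    precision = tp / (tp + fp)    # correctly predicted frauds / predicted frauds
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, precision, accuracy

sens, prec, acc = metrics(tp=80, fp=20, fn=10, tn=890)
# sens = 80/90, prec = 80/100, acc = 970/1000
```

Note that accuracy can stay high on imbalanced data even when sensitivity is poor, which is why the formula's missing FP and TN terms matter.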


The Confluence of Geometry and Learning

#artificialintelligence

Building computational models for single image 3D inference is a long-standing problem in computer vision. Instead, akin to the human visual system, we want our computational systems to learn 3D prediction without requiring 3D supervision. The first one leverages classical ray consistency formulations to introduce a generic Verifier which can measure consistency between a 3D shape and diverse kinds of observations $O$. In another recent work, we show that the pose requirement can be relaxed, and in fact jointly learned with the single image 3D predictor $P$.

