Statistical Learning


Versatile linkage: a family of space-conserving strategies for agglomerative hierarchical clustering

#artificialintelligence

Agglomerative hierarchical clustering can be implemented with several strategies that differ in how elements of a collection are grouped together to build a hierarchy of clusters. Here we introduce versatile linkage, a new infinite system of agglomerative hierarchical clustering strategies based on generalized means, which ranges from single linkage to complete linkage, passing through arithmetic average linkage and other as-yet-unexplored clustering methods such as geometric linkage and harmonic linkage. We compare the different clustering strategies in terms of cophenetic correlation and mean absolute error, as well as tree balance and space distortion, two new measures proposed to describe hierarchical trees. Unlike the β-flexible clustering system, the versatile linkage family is shown to be space-conserving.
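The generalized (power) mean at the heart of versatile linkage is easy to sketch. Below is a minimal, hypothetical Python illustration (my own, not the authors' implementation): the inter-cluster distance is the generalized mean of the pairwise distances, with the exponent p selecting the strategy.

```python
import numpy as np

def generalized_mean_linkage(dists, p):
    """Versatile-linkage distance between two clusters: the generalized
    (power) mean of the pairwise inter-cluster distances `dists`.
    p -> -inf: single linkage (min); p = -1: harmonic linkage;
    p -> 0: geometric linkage; p = 1: arithmetic average linkage;
    p -> +inf: complete linkage (max)."""
    d = np.asarray(dists, dtype=float)
    if p == 0:
        # Limit p -> 0 is the geometric mean.
        return float(np.exp(np.mean(np.log(d))))
    return float(np.mean(d ** p) ** (1.0 / p))
```

For example, on the pairwise distances [1, 2, 4], p = 1 gives the arithmetic average 7/3, p = 0 gives the geometric mean 2, and p = -1 gives the harmonic mean 12/7; large |p| approaches the complete- and single-linkage extremes.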


7 Simple Tricks to Handle Complex Machine Learning Issues

#artificialintelligence

We propose simple solutions to important problems that all data scientists face almost every day. Many statistics, such as correlations or R-squared, depend on the sample size, making it difficult to compare values computed on two data sets of different sizes. Based on re-sampling techniques, use this easy trick to compare apples with apples, not with oranges. We also propose a generic methodology, again based on re-sampling techniques, to compute any confidence interval and to test hypotheses without using any statistical theory. It is easy to implement, even in Excel.
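As a rough illustration of the re-sampling ideas described above (the function names and parameters are my own, not the article's), one can average a statistic over repeated fixed-size subsamples so that data sets of different sizes become comparable, and use a percentile bootstrap for theory-free confidence intervals:

```python
import random
import statistics

def resampled_statistic(data, stat, n, trials=200, seed=0):
    """Estimate `stat` at a fixed sample size n by averaging over repeated
    subsamples, so values from data sets of different sizes are comparable."""
    rng = random.Random(seed)
    return statistics.mean(stat(rng.sample(data, n)) for _ in range(trials))

def bootstrap_ci(data, stat, level=0.95, trials=1000, seed=0):
    """Percentile bootstrap confidence interval: no distributional theory,
    just re-sampling with replacement and taking empirical quantiles."""
    rng = random.Random(seed)
    n = len(data)
    vals = sorted(stat([rng.choice(data) for _ in range(n)])
                  for _ in range(trials))
    lo = int((1 - level) / 2 * trials)
    return vals[lo], vals[trials - 1 - lo]
```

For instance, a correlation computed on 10,000 rows can be re-estimated via subsamples of the same size as a smaller data set before the two values are compared.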


How to Develop a Face Recognition System Using FaceNet in Keras

#artificialintelligence

Face recognition is a computer vision task of identifying and verifying a person based on a photograph of their face. FaceNet is a face recognition system developed in 2015 by researchers at Google that achieved then state-of-the-art results on a range of face recognition benchmark datasets. The FaceNet system can be used broadly thanks to multiple third-party open source implementations of the model and the availability of pre-trained models. It can be used to extract high-quality features from faces, called face embeddings, that can then be used to train a face identification system. In this tutorial, you will discover how to develop a face recognition system using FaceNet and an SVM classifier to identify people from photographs.
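A hedged sketch of the final classification stage: assuming face embeddings have already been extracted (here replaced by synthetic 128-dimensional vectors, since running FaceNet itself is out of scope), a linear SVM can be trained to identify people. The data and setup below are illustrative, not taken from the tutorial.

```python
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.svm import SVC

# Hypothetical stand-in data: in practice each row would be a 128-dimensional
# embedding produced by a pre-trained FaceNet model from an aligned face crop.
rng = np.random.default_rng(0)
emb_a = rng.normal(0.0, 1.0, (20, 128)) + 2.0   # embeddings of "person A"
emb_b = rng.normal(0.0, 1.0, (20, 128)) - 2.0   # embeddings of "person B"

# FaceNet embeddings are typically L2-normalized before classification.
X = normalize(np.vstack([emb_a, emb_b]))
y = ["A"] * 20 + ["B"] * 20

clf = SVC(kernel="linear").fit(X, y)  # linear SVM on unit-norm embeddings
```

At prediction time, a new face photograph would be embedded the same way and passed through `clf.predict`.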


7 Steps to Mastering Intermediate Machine Learning with Python -- 2019 Edition

#artificialintelligence

Are you interested in learning more about machine learning with Python? I recently wrote 7 Steps to Mastering Basic Machine Learning with Python -- 2019 Edition, a first step in an attempt to update a pair of posts I wrote some time back (7 Steps to Mastering Machine Learning With Python and 7 More Steps to Mastering Machine Learning With Python), which are getting stale at this point, having been around for a few years. It's time to add on to the "basic" post with a set of steps for learning "intermediate" level machine learning with Python. We're talking "intermediate" in a relative sense, however, so do not expect to be a research-caliber machine learning engineer after getting through this post. The learning path is aimed at those with some understanding of programming, computer science concepts, and/or machine learning in an abstract sense, who want to use the implementations of machine learning algorithms in the prevalent Python libraries to build their own machine learning models.


True Gradient Descent

#artificialintelligence

In machine learning, the effectiveness of a network is measured by an error function, which quantifies how well the network performs its job. In general, the higher the error, the worse the network; conversely, the lower the error, the better. When a machine is trying to learn a task, it tries to make as few mistakes as possible; that is, to minimize its error function. True gradient descent is the application of gradient descent to a machine learning network to minimize an error function.
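A minimal sketch of the idea in one dimension (illustrative, not tied to any particular network): repeatedly step against the gradient of the error function until it is minimized.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize an error function by repeatedly stepping against its
    gradient `grad`, starting from `x0` with learning rate `lr`."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # move downhill along the error surface
    return x

# Example: minimize the error E(w) = (w - 3)**2, whose gradient is 2*(w - 3).
# The minimizer is w = 3, where the error (number of "mistakes") is zero.
w = gradient_descent(lambda w: 2 * (w - 3), x0=0.0)
```

In a real network, `w` would be a vector of weights and `grad` would be computed by backpropagation, but the update rule is the same.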


Comparing Classifiers: Decision Trees, K-NN & Naive Bayes

#artificialintelligence

A myriad of options exist for classification. That said, three popular classification methods -- Decision Trees, k-NN and Naive Bayes -- can be tweaked for practically every situation. Naive Bayes and k-NN are both examples of supervised learning (where the data comes already labeled). Decision trees are easy to use for small numbers of classes. If you're trying to decide between the three, your best option is to take all three for a test drive on your data and see which produces the best results.
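The suggested "test drive" can be sketched with scikit-learn (assuming it is available; the iris data set stands in for your own data): cross-validate all three classifiers and compare their mean accuracy.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

# Mean 5-fold cross-validated accuracy for each of the three classifiers.
scores = {
    "decision tree": cross_val_score(
        DecisionTreeClassifier(random_state=0), X, y, cv=5).mean(),
    "k-NN": cross_val_score(
        KNeighborsClassifier(n_neighbors=5), X, y, cv=5).mean(),
    "naive Bayes": cross_val_score(GaussianNB(), X, y, cv=5).mean(),
}
best = max(scores, key=scores.get)  # the winner on *this* data set
```

Which method wins depends entirely on the data, which is the article's point: run the comparison rather than guessing.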


Free Book: Foundations of Data Science (from Microsoft Research Lab)

#artificialintelligence

Computer science as an academic discipline began in the 1960s. Emphasis was on programming languages, compilers, operating systems, and the mathematical theory that supported these areas. Courses in theoretical computer science covered finite automata, regular expressions, context-free languages, and computability. In the 1970s, the study of algorithms was added as an important component of theory. The emphasis was on making computers useful.


Distilling BERT -- How to achieve BERT performance using logistic regression

#artificialintelligence

BERT is awesome, and it's everywhere. It looks like any NLP task can benefit from utilizing BERT. The authors showed that this is indeed the case, and from my experience, it works like magic. It's easy to use, works on a small amount of data and supports many different languages. It seems like there's no single reason not to use it everywhere.
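A hedged sketch of the distillation idea in the title (the article's teacher is a fine-tuned BERT model; here a random forest stands in so the example is self-contained): the logistic-regression student is trained on the teacher's predicted labels for unlabeled data rather than on ground truth, so it learns to mimic the larger model.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Stand-in for text features and a BERT teacher: synthetic data and a
# random forest, purely so the sketch runs without the transformers stack.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
teacher = RandomForestClassifier(random_state=0).fit(X[:200], y[:200])

# Distillation step: label the "unlabeled" half with the teacher's
# predictions, then fit the small student model on those pseudo-labels.
pseudo_labels = teacher.predict(X[200:])
student = LogisticRegression(max_iter=1000).fit(X[200:], pseudo_labels)

# How closely the student reproduces the teacher's behavior.
agreement = (student.predict(X[200:]) == pseudo_labels).mean()
```

The payoff is the same as in the article: once trained, the student is far cheaper to serve than the teacher.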


Proof-of-concept system uses smart speakers to catch signs of cardiac arrest

#artificialintelligence

In an effort to tackle in-home cardiac arrest, University of Washington researchers have devised a novel contactless system that uses smartphones or voice-based personal assistants to identify telltale breathing patterns that accompany an attack. The proof-of-concept strategy, described in an NPJ Digital Medicine paper published this morning, involved a supervised machine learning model called a support-vector machine that was trained for use in the bedroom, a controlled environment in which the majority of in-home cardiac arrests occur. "Sometimes reported as 'gasping' breaths, agonal respirations may hold potential as an audible diagnostic biomarker, particularly in unwitnessed cardiac arrests that occur in a private residence, the location of [two-thirds] of all [out-of-hospital cardiac arrests]," the researchers wrote. "The widespread adoption of smartphones and smart speakers (projected to be in 75% of US households by 2020) presents a unique opportunity to identify this audible biomarker and connect unwitnessed cardiac arrest victims to emergency medical services (EMS) or others who can administer cardiopulmonary resuscitation." Cross-validation analysis of the trained classifier yielded an overall sensitivity and specificity of 97.24% and 99.51%.
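Sensitivity and specificity, the two figures reported above, fall straight out of a confusion matrix; a small self-contained sketch (the labels and data are illustrative, not the study's):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    Here 1 = agonal breathing detected, 0 = other bedroom audio."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

High sensitivity matters so that real arrests are not missed, while high specificity keeps false alarms (e.g. snoring misread as agonal breathing) from triggering unnecessary EMS calls.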