Statistical Learning


Data Cleaning and Preprocessing

#artificialintelligence

Data preprocessing involves the transformation of the raw dataset into an understandable format. Preprocessing is a fundamental stage in data mining that improves the quality and efficiency of later analysis, and the preprocessing methods chosen directly affect the outcomes of any analytic algorithm. Data is raw information: it's the representation of both human and machine observations of the world. What dataset you need depends entirely on the problem you want to solve.
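
Two common preprocessing steps the snippet alludes to are handling missing values and rescaling features. A minimal sketch, assuming mean imputation and min-max scaling (the data and function names are illustrative, not from the article):

```python
# A minimal preprocessing sketch: mean-impute missing values, then
# min-max scale each column to [0, 1]. The data is illustrative.

def impute_mean(column):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

def min_max_scale(column):
    """Scale values linearly so the minimum maps to 0 and the maximum to 1."""
    lo, hi = min(column), max(column)
    if hi == lo:                       # constant column: map everything to 0
        return [0.0 for _ in column]
    return [(v - lo) / (hi - lo) for v in column]

raw = [4.0, None, 10.0, 6.0]           # raw data with one missing entry
clean = min_max_scale(impute_mean(raw))
```

Libraries such as scikit-learn provide production versions of both steps, but the underlying transformations are this simple.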


Testing Your Machine Learning Pipelines - KDnuggets

#artificialintelligence

When it comes to data products, there is a common misconception that they cannot be put through automated testing. Although some parts of the pipeline cannot go through traditional testing methodologies due to their experimental and stochastic nature, most of the pipeline can, and the more unpredictable algorithms can be put through specialised validation processes. Let's take a look at traditional testing methodologies and how we can apply them to our data/ML pipelines. The testing pyramid is a representation of the types of tests that you would write for an application.
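
The deterministic stages of a pipeline (parsing, feature extraction, data validation) can be unit-tested like any other code. A sketch, assuming a hypothetical `tokenize` feature-extraction step; the stochastic model-training stage would be validated separately:

```python
# Unit-testing a deterministic pipeline step with ordinary assertions.
# `tokenize` is a hypothetical feature-extraction step, not from the article.

def tokenize(text):
    """Lowercase and split on whitespace -- a deterministic transform."""
    return text.lower().split()

def test_tokenize_lowercases():
    assert tokenize("Hello World") == ["hello", "world"]

def test_tokenize_empty_input():
    assert tokenize("") == []

# Run like plain functions (or let a runner such as pytest collect them).
test_tokenize_lowercases()
test_tokenize_empty_input()
```

Tests like these form the wide base of the pyramid; fewer, slower integration tests covering the whole pipeline sit above them.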


Chi-Squared For Feature Selection using SelectKBest

#artificialintelligence

In this video, I'll show you how SelectKBest uses the Chi-squared test for feature selection on categorical features and target columns. We calculate the Chi-squared statistic between each feature and the target, and select the desired number of features with the best scores or the lowest p-values. The Chi-squared (χ²) test is used in statistics to test the independence of two events; in feature selection, we use it to test whether the occurrence of a specific feature and the target are independent. For each feature-target combination, a high χ² score or a low p-value indicates that the target column depends on the feature column.
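
The statistic behind SelectKBest can be computed from an observed-vs-expected contingency table. A pure-Python sketch of the idea (scikit-learn's `SelectKBest` with `chi2` does the production version; the toy data here is illustrative):

```python
from collections import Counter

def chi_square(feature, target):
    """Chi-squared statistic between two categorical columns,
    summing (observed - expected)^2 / expected over the contingency table."""
    n = len(feature)
    joint = Counter(zip(feature, target))
    f_marg = Counter(feature)
    t_marg = Counter(target)
    stat = 0.0
    for f in f_marg:
        for t in t_marg:
            observed = joint.get((f, t), 0)
            expected = f_marg[f] * t_marg[t] / n
            stat += (observed - expected) ** 2 / expected
    return stat

def select_k_best(features, target, k):
    """Keep the k feature names with the highest chi-squared scores."""
    scores = {name: chi_square(col, target) for name, col in features.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy data: "signal" tracks the target perfectly, "noise" is constant.
target = ["spam", "ham", "spam", "ham"]
features = {
    "signal": ["yes", "no", "yes", "no"],   # dependent on the target
    "noise":  ["a", "a", "a", "a"],         # independent of the target
}
best = select_k_best(features, target, k=1)   # -> ["signal"]
```

The dependent feature scores χ² = 4 here while the constant one scores 0, so the selection keeps `signal`, exactly the behaviour described above.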


Supervised vs Unsupervised Learning

#artificialintelligence

In machine learning, most tasks can be easily categorized into one of two classes: supervised learning problems or unsupervised learning problems. In supervised learning the data has labels or classes appended to it, while in unsupervised learning the data is unlabeled. Let's take a closer look at why this distinction is important and at some of the algorithms associated with each type of learning. Most machine learning tasks are in the domain of supervised learning: the individual instances/data points in the dataset have a class or label assigned to them.
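
The distinction is easiest to see side by side. A sketch with one minimal algorithm for each setting (the data, the 1-NN classifier, and the threshold split are all illustrative assumptions; the split stands in for a real clustering algorithm like k-means):

```python
# Supervised: each point carries a label, so we can fit a predictor.
labeled = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]

def nearest_neighbor(x, data):
    """1-NN classifier: predict the label of the closest training point."""
    return min(data, key=lambda pair: abs(pair[0] - x))[1]

prediction = nearest_neighbor(1.5, labeled)   # -> "small"

# Unsupervised: the same points with the labels stripped away; the best
# we can do is group them, here with a simple midpoint split as a
# stand-in for clustering algorithms such as k-means.
unlabeled = [x for x, _ in labeled]
midpoint = (min(unlabeled) + max(unlabeled)) / 2
clusters = [0 if x < midpoint else 1 for x in unlabeled]   # -> [0, 0, 1, 1]
```

The supervised model can name its outputs ("small"/"large") because the labels told it what to predict; the unsupervised grouping can only say which points belong together.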


Machine Learning vs. Statistics - Silicon Valley Data Science

#artificialintelligence

Throughout its history, Machine Learning (ML) has coexisted with Statistics uneasily, like an ex-boyfriend accidentally seated with the groom's family at a wedding reception: both uncertain where to lead the conversation, but painfully aware of the potential for awkwardness. This is caused in part by the fact that Machine Learning has adopted many of Statistics' methods, but was never intended to replace statistics, or even to have a statistical basis originally. Nevertheless, Statisticians and ML practitioners have often ended up working together, or working on similar tasks, and wondering what each was about. The question, "What's the difference between Machine Learning and Statistics?" has been asked now for decades. Machine Learning is largely a hybrid field, taking its inspiration and techniques from all manner of sources. It has changed directions throughout its history and often seemed like an enigma to those outside of it.


An easy guide to choose the right Machine Learning algorithm for your task

#artificialintelligence

Well, there is no straightforward, sure-shot answer to this question. The answer depends on many factors: the problem statement and the kind of output you want, the type and size of the data, the available computational time, and the number of features and observations in the data, to name a few. It is usually recommended to gather a good amount of data to get reliable predictions, but data availability is often a constraint. So, if the training set is small, or if the dataset has few observations and a high number of features (as in genetics or textual data), choose algorithms with high bias and low variance, like linear regression, Naïve Bayes, or a linear SVM.
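
That rule of thumb can be written down as a hypothetical helper. The 10x observations-to-features ratio and the high-variance alternatives in the second branch are illustrative assumptions, not hard rules from the article:

```python
# A hypothetical decision helper for the bias/variance rule of thumb:
# few observations relative to features -> prefer high-bias/low-variance
# models. The 10x threshold is an illustrative assumption.

def suggest_models(n_observations, n_features):
    if n_observations < 10 * n_features:
        # Small data relative to dimensionality: simple, stable models.
        return ["linear regression", "naive Bayes", "linear SVM"]
    # Plenty of data: lower-bias, higher-variance models become viable.
    return ["kernel SVM", "random forest", "gradient boosting"]

suggest_models(n_observations=200, n_features=5000)
# -> the high-bias/low-variance picks, since observations << features
```

In practice you would still cross-validate a few candidates rather than trust any single heuristic.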


How to Build Machine Learning Pipelines using PySpark

#artificialintelligence

It's rare that we get a dataset without any missing values. Can you remember the last time that happened? It is important to check the number of missing values present in every column: knowing the counts lets us treat the missing values before building any machine learning model on that data.
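
The per-column missing-value count is a one-liner per column. A sketch on plain Python rows (with a PySpark DataFrame the same idea applies, filtering on null entries column by column; the rows below are illustrative):

```python
# Counting missing values per column before modelling.
rows = [
    {"age": 34,   "income": 72000},
    {"age": None, "income": 61000},
    {"age": 29,   "income": None},
]

def missing_counts(rows):
    """Number of None entries in each column."""
    columns = rows[0].keys()
    return {c: sum(1 for r in rows if r[c] is None) for c in columns}

missing_counts(rows)   # -> {"age": 1, "income": 1}
```

Once the counts are known, each column can be imputed or dropped before the pipeline's modelling stage.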


Time Series Analysis: Looking Back to See the Future

#artificialintelligence

Whether on-premises or in the cloud, your data provides a link to the past and a glimpse into the future. Why did you lose past customers? Which current customers should you pay more attention to? Where are your new customers going to come from? In this blog, I'll talk about the capabilities of Teradata Vantage and its Machine Learning Engine, and how it can help you turn 100% of your data into answers that your business can use to pave a path to the future.


DSC Webinar Series: Forecasting Using TensorFlow and FB's Prophet

#artificialintelligence

We live in a time where we are able to monitor everything: servers, containers, fitness levels, power consumption, and more. Making predictions on time series data is often just as important as the monitoring itself. In this latest Data Science Central webinar, we will learn how InfluxDB can be used with TensorFlow and FB's Prophet to make predictions and solve data engineering problems.