Regression


How to Choose a Machine Learning Technique

#artificialintelligence

Why are there so many machine learning techniques? Different algorithms solve different kinds of problems, and the results you get depend directly on the model you choose. That is why it is so important to know how to match a machine learning algorithm to a particular problem. In this post, we are going to talk about just that. First of all, to choose an algorithm for your project, you need to know what kinds of algorithms exist.


Writing your first hello world in machine learning with Python

#artificialintelligence

Machine Learning is the process of taking data as input, identifying the trends and patterns in that data, and producing a program as output. This program, also called a model, is a representation of the patterns that define the data. New data can then be given as input to this model, and it will be able to classify it or make predictions on it. Python is a simple, easy-to-learn language that fits the requirements of machine learning very well and is designed to favor data analysis.
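To make this input-patterns-output loop concrete, here is a minimal "hello world" sketch using scikit-learn; the iris dataset and decision tree are assumed example choices, not the ones from the article.

```python
# Minimal "hello world": learn patterns from data, then predict on new data.
# The iris dataset and decision tree are assumed example choices.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)                        # input data and labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)   # the "program" learned from the data
print("accuracy on unseen data:", model.score(X_test, y_test))
print("prediction for one new sample:", model.predict(X_test[:1]))
```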


Inside: Logistic Regression

#artificialintelligence

This is part of a series of blogs where I'll be demonstrating different aspects and the theory of machine learning algorithms using math and code. This includes the usual modeling structure of the algorithm and the intuition for why and how it works, using Python code. Logistic Regression is one of the first algorithms introduced when someone learns about classification. You have probably read about regression and the continuous nature of the variable being predicted. Classification, in contrast, is done on discrete variables: your predictions are finite and class-based, like Yes/No or True/False for binary outcomes.
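As a rough illustration of that distinction, here is a minimal sketch of logistic regression producing discrete class predictions (and the probabilities behind them) with scikit-learn; the breast-cancer dataset is an assumed example, not one used in the series.

```python
# Sketch: logistic regression for a binary (Yes/No style) outcome.
# The breast-cancer dataset is an assumed example choice.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(clf.predict(X_test[:5]))        # discrete class labels (0 or 1)
print(clf.predict_proba(X_test[:5]))  # probabilities those labels are based on
```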


Logistic Regression from Scratch with Only Python Code

#artificialintelligence

In this article, we will build a logistic regression model to classify whether a patient has diabetes or not. The main focus is that we will use only Python to build functions for reading the file, normalizing the data, optimizing the parameters, and more, so you will get in-depth knowledge of how everything from reading the file to making predictions works. If you are new to machine learning, or not familiar with logistic regression or gradient descent, don't worry; I'll try my best to explain these in layman's terms. There are other tutorials out there that explain the same concepts.
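As a taste of the "only Python" approach, here is a minimal sketch of the core pieces, sigmoid, prediction, and gradient-descent weight updates, on a tiny made-up dataset; it is not the article's diabetes example.

```python
# Pure-Python sketch: sigmoid, prediction, and gradient-descent weight updates.
# The toy data below is invented for illustration, not the diabetes file.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(row, weights):
    # weights[0] is the bias term; the rest pair with the features
    z = weights[0] + sum(w * x for w, x in zip(weights[1:], row))
    return sigmoid(z)

def train(X, y, lr=0.1, epochs=1000):
    weights = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for row, label in zip(X, y):
            error = predict(row, weights) - label   # gradient of the log loss
            weights[0] -= lr * error
            for i, x in enumerate(row):
                weights[i + 1] -= lr * error * x
    return weights

X = [[0.5, 1.2], [1.0, 0.8], [2.5, 2.0], [3.0, 2.8]]   # toy features
y = [0, 0, 1, 1]                                        # toy labels
w = train(X, y)
print([round(predict(row, w)) for row in X])            # recovers [0, 0, 1, 1]
```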


Machine Learning Basics: Polynomial Regression

#artificialintelligence

Learn to build a Polynomial Regression model to predict values for a non-linear dataset. In this article, we will go through the program for building a Polynomial Regression model on non-linear data. In the previous examples of Linear Regression, when the data was plotted on a graph, there was a linear relationship between the dependent and independent variables, so it was more suitable to build a linear model to get accurate predictions. But what if the data points are non-linear, so that a linear model produces large errors in its predictions? In that case, we have to fit a polynomial relationship that accurately models the data points in the given plot.
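As a rough sketch of that idea, the snippet below fits both a plain linear model and a degree-2 polynomial model to synthetic quadratic data with scikit-learn; the degree and the toy data are assumptions for illustration.

```python
# Sketch: polynomial regression on synthetic non-linear data.
# The quadratic toy data and degree=2 are assumed for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 50).reshape(-1, 1)
y = 0.5 * X.ravel() ** 2 + X.ravel() + rng.normal(scale=0.3, size=50)   # non-linear target

linear = LinearRegression().fit(X, y)
poly = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

print("linear R^2:    ", round(linear.score(X, y), 3))   # underfits the curve
print("polynomial R^2:", round(poly.score(X, y), 3))     # captures the non-linearity
```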


Regression with PyCaret: A better machine learning library

#artificialintelligence

I assume you already know what regression is. "Regression is a statistical method used in finance, investing, and other disciplines that attempts to determine the strength and character of the relationship between one dependent variable (usually denoted by Y) and a series of other variables (known as independent variables)." In the simplest terms -- we want to fit a line (or hyperplane) through the data points to obtain a line of best fit. The algorithm aims to find the line that minimizes the cost function -- typically MSE or RMSE. That's linear regression, but there are other types, like polynomial regression.
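For context, here is a minimal sketch of what a PyCaret regression run looks like, assuming the pycaret.regression API (setup, compare_models, predict_model) and PyCaret's bundled 'insurance' example dataset; adapt the names to your own data.

```python
# Sketch of a PyCaret regression run; the dataset and target column are assumptions.
from pycaret.datasets import get_data
from pycaret.regression import setup, compare_models, predict_model

data = get_data('insurance')                  # example dataset shipped with PyCaret
setup(data, target='charges', session_id=42)  # defines the experiment and hold-out split
best = compare_models()                       # trains many regressors and ranks them (RMSE etc.)
print(predict_model(best).head())             # predictions on the hold-out data
```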


Multiple Linear Regression with scikit-learn

#artificialintelligence

In this article, we studied one of the most fundamental machine learning algorithms, linear regression. We implemented both simple linear regression and multiple linear regression with the help of the scikit-learn machine learning library. I hope you have enjoyed the reading. Let me know your doubts/suggestions in the comment section. In this 2-hour long project-based course, you will build and evaluate multiple linear regression models using Python.
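A minimal sketch of multiple linear regression with scikit-learn looks roughly like this; the built-in diabetes dataset stands in for whatever data the course actually uses.

```python
# Sketch: multiple linear regression with scikit-learn.
# The diabetes dataset is an assumed stand-in with ten predictor columns.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

reg = LinearRegression().fit(X_train, y_train)
print("coefficients:", reg.coef_)             # one weight per independent variable
print("intercept:   ", reg.intercept_)
print("test R^2:    ", reg.score(X_test, y_test))
```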


Feature Selection in Machine Learning

#artificialintelligence

In the real world, data is not as clean as it's often assumed to be. That's where all the data mining and wrangling comes in: building insights out of data that has been structured using queries and that now probably contains missing values and exhibits patterns unseen to the naked eye. That's where machine learning comes in: checking for patterns and using those patterns to predict outcomes from these newly understood relationships in the data. To understand the depth of the algorithm, one needs to read through the variables in the data and what those variables represent. Understanding this is important, because you will need to justify your outcomes based on your understanding of the data.
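As one concrete (and assumed) example of feature selection, the sketch below keeps only the five features that score highest on a univariate test using scikit-learn's SelectKBest; the dataset and the choice of k are illustrative.

```python
# Sketch: univariate feature selection with SelectKBest.
# The breast-cancer dataset and k=5 are assumed for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif

data = load_breast_cancer()
X, y = data.data, data.target

selector = SelectKBest(score_func=f_classif, k=5).fit(X, y)   # keep the 5 strongest features
print("selected:", list(data.feature_names[selector.get_support()]))
print("reduced shape:", selector.transform(X).shape)          # (n_samples, 5)
```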


How to Use Feature Extraction on Tabular Data for Machine Learning

#artificialintelligence

Machine learning predictive modeling performance is only as good as your data, and your data is only as good as the way you prepare it for modeling. The most common approach to data preparation is to study a dataset, review the expectations of a machine learning algorithm, and then carefully choose the most appropriate data preparation techniques to transform the raw data to best meet those expectations. This is slow, expensive, and requires a vast amount of expertise. An alternative approach is to apply a suite of common and commonly useful data preparation techniques to the raw data in parallel and combine the results of all of the transforms into a single large dataset from which a model can be fit and evaluated. This philosophy treats data transforms as a way to extract salient features from the raw data, exposing the structure of the problem to the learning algorithms.
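A minimal sketch of that parallel-transforms idea uses scikit-learn's FeatureUnion; the specific transforms and dataset below are assumptions, not the article's exact recipe.

```python
# Sketch: apply several transforms in parallel and stack their outputs into one
# wide feature matrix, then fit a model on it. Transform choices are assumed.
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler, QuantileTransformer

X, y = load_wine(return_X_y=True)

union = FeatureUnion([
    ("minmax", MinMaxScaler()),
    ("standard", StandardScaler()),
    ("quantile", QuantileTransformer(n_quantiles=50, output_distribution="normal")),
    ("pca", PCA(n_components=5)),
])   # each transform produces its own view of the raw data; FeatureUnion concatenates them

model = Pipeline([("features", union), ("clf", LogisticRegression(max_iter=5000))])
print("cross-val accuracy:", cross_val_score(model, X, y, cv=5).mean())
```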

