Statistical Learning


Clustering in Python

#artificialintelligence

This article was published as a part of the Data Science Blogathon. Cluster analysis, or clustering, is an unsupervised machine learning technique that groups unlabeled data. It aims to form clusters, or groups, from the data points in a dataset such that there is high intra-cluster similarity and low inter-cluster similarity. In layman's terms, clustering forms subsets of data points that are very similar to each other, while the resulting groups can be clearly distinguished from one another. Let's assume we have a dataset and we don't know anything about it.
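
As a minimal illustration of the idea (our own sketch, not code from the article), scikit-learn's KMeans groups unlabeled points so that each cluster is internally similar and the clusters themselves are well separated:

```python
# A minimal clustering sketch in Python (illustrative, not from the article):
# group unlabeled points with scikit-learn's KMeans.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic unlabeled data: 300 points around 3 hidden centers.
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=42)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)          # cluster assignment for each point

print(labels[:10])                      # e.g. [2 0 0 1 ...]
print(kmeans.cluster_centers_)          # learned cluster centers
```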


Top 5 Best Articles on R for Business [November 2020]

#artificialintelligence

Register for our blog to get the Top Articles every month. Making multiple ARIMA time series models in R used to be difficult, but with the purrr nest() function and modeltime, forecasting has never been easier. Learn how to make many ARIMA models in this tutorial. This article was written by Business Science's very own Matt Dancho. In this post, we load our cleaned-up big MT Cars data set in order to be able to refer directly to the variable without a short code or the f function from our datatable.
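
The tutorial's own workflow is in R (purrr's nest() plus modeltime). Purely as a loose Python analogue for illustration, and not the article's code, one ARIMA model can be fit per group with statsmodels:

```python
# Loose Python analogue of "many ARIMA models, one per group" (illustrative
# only; the article's actual workflow uses R's purrr and modeltime).
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
# Toy data: one short monthly series per group.
data = pd.DataFrame({
    "group": np.repeat(["A", "B", "C"], 36),
    "value": np.concatenate([rng.normal(loc, 1, 36).cumsum()
                             for loc in (0.5, 1.0, 1.5)]),
})

forecasts = {}
for name, g in data.groupby("group"):
    model = ARIMA(g["value"].to_numpy(), order=(1, 1, 1)).fit()
    forecasts[name] = model.forecast(steps=6)   # 6-step-ahead forecast per group
print({k: v[:2].round(2) for k, v in forecasts.items()})
```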


Measure Bias and Variance Using Various Machine Learning Models

#artificialintelligence

This article was published as a part of the Data Science Blogathon. One of the most used metrics for measuring model performance is predictive error. The components of any predictive error are noise, bias, and variance. This article intends to measure the bias and variance of a given model and observe how they behave across various models, such as Linear Regression, Decision Tree, Bagging, and Random Forest, for a given number of sample sizes. Bias: the difference between the prediction of the true model and the average prediction of models built on n samples drawn from the population.
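
As a minimal sketch of that definition (not the article's own code), bias and variance can be estimated by training the same model on many samples drawn from a known "true" function and comparing the averaged predictions against it:

```python
# Minimal bias/variance estimate by resampling (illustrative sketch).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
true_fn = lambda x: np.sin(x)                        # the "true model"
X_test = np.linspace(0, 2 * np.pi, 50)[:, None]

n_rounds, n_samples = 200, 80
preds = np.empty((n_rounds, len(X_test)))
for i in range(n_rounds):
    # Draw a fresh noisy training sample from the population.
    X_train = rng.uniform(0, 2 * np.pi, (n_samples, 1))
    y_train = true_fn(X_train).ravel() + rng.normal(0, 0.3, n_samples)
    model = DecisionTreeRegressor(max_depth=4, random_state=i)
    preds[i] = model.fit(X_train, y_train).predict(X_test)

avg_pred = preds.mean(axis=0)
bias_sq = np.mean((avg_pred - true_fn(X_test).ravel()) ** 2)   # squared bias
variance = np.mean(preds.var(axis=0))                          # variance
print(f"bias^2 = {bias_sq:.4f}, variance = {variance:.4f}")
```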


Deep Learning Prerequisites: Logistic Regression in Python

#artificialintelligence

Deep Learning Prerequisites: Logistic Regression in Python. Data science, machine learning, and artificial intelligence in Python for students and professionals. Created by Lazy Programmer Inc. English [Auto], Portuguese [Auto]. This course is a lead-in to deep learning and neural networks; it covers a popular and fundamental technique used in machine learning, data science and statistics: logistic regression. We cover the theory from the ground up: derivation of the solution, and applications to real-world problems. We show you how you might code your own logistic regression module in Python. This course does not require any external materials. Everything needed (Python, and some Python libraries) can be obtained for free.
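
As a minimal from-scratch sketch of the technique the course teaches (our illustration, not the course's module), logistic regression can be fit with plain gradient descent on the cross-entropy loss:

```python
# Minimal from-scratch logistic regression (illustrative sketch):
# sigmoid activation, cross-entropy gradient, plain gradient descent.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=1000):
    # X: (n_samples, n_features), y: 0/1 labels.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)              # predicted probabilities
        grad_w = X.T @ (p - y) / len(y)     # gradient of cross-entropy w.r.t. w
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy usage: two linearly separable clusters.
X = np.vstack([np.random.randn(50, 2) - 2, np.random.randn(50, 2) + 2])
y = np.array([0] * 50 + [1] * 50)
w, b = fit_logistic(X, y)
accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```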


Get Your FREE LIGHT SAVER

#artificialintelligence

If you were to purchase the Light Saver on Liberty Technologies' website it would cost you $49.97 – a fair price for a tactical torch that saves you money, offers various lighting options, and is lightweight and compact for everyday carry. But we can get you one for free today. Today we're giving these $50 tactical torches away for free to promote this product and to give our two brands some exposure. We just ask that you cover the cost of the stamps to get it to your home.


Top Deep Learning Based Time Series Methods

#artificialintelligence

The components of time series can be as complex and sophisticated as the data itself. With every passing second, the amount of data multiplies and modelling becomes trickier. On social media platforms, for instance, data handling chores get worse as their popularity increases. Twitter stores 1.5 petabytes of logical time series data and handles 25K query requests per minute. There are even more critical applications of time series modelling, such as IoT and various edge devices.


Training your Neural Network with Cyclical Learning Rates – MachineCurve

#artificialintelligence

At a high level, training supervised machine learning models involves a few easy steps: feeding data to your model, computing loss based on the differences between predictions and ground truth, and using that loss to improve the model with an optimizer. For the last step, it's possible to choose from multiple optimizers – ranging from traditional Stochastic Gradient Descent to the adaptive optimizers that are also very common today. Say that you settle for the first – Stochastic Gradient Descent (SGD). Likely, in your deep learning framework, you'll see that the learning rate is a parameter that can be configured, with a default value that is preconfigured most of the time. Now, what is this learning rate? Why do we need it?
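
As a minimal sketch of the idea (the framework choice here is ours, not necessarily the article's), a cyclical learning rate makes the SGD learning rate oscillate between a base and a maximum value during training, here via PyTorch's CyclicLR scheduler:

```python
# Minimal cyclical learning rate sketch (illustrative; framework choice ours).
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                               # toy model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=0.001, max_lr=0.1,
    step_size_up=50, mode="triangular")

loss_fn = nn.MSELoss()
for step in range(200):
    x, y = torch.randn(32, 10), torch.randn(32, 1)     # toy batch
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()                                   # advance the LR cycle each batch
    if step % 50 == 0:
        print(step, scheduler.get_last_lr()[0])        # watch the LR rise and fall
```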


Machine Learning & Deep Learning in Python & R

#artificialintelligence

In this section we will learn what Machine Learning means and the different terms associated with it. You will see some examples so that you understand what machine learning actually is. It also covers the steps involved in building a machine learning model – not just linear models, but any machine learning model.
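
As a minimal illustration of those steps (ours, not the course's material), a typical workflow splits the data, fits a model, and evaluates it on held-out data:

```python
# Minimal sketch of the usual model-building steps (illustrative only).
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)                              # 1. get data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)    # 2. split
model = LinearRegression().fit(X_tr, y_tr)                         # 3. train
print("R^2 on held-out data:", r2_score(y_te, model.predict(X_te)))  # 4. evaluate
```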


Creating the Whole Machine Learning Pipeline with PyCaret

#artificialintelligence

This tutorial covers the entire ML process, from data ingestion, pre-processing, model training, and hyper-parameter tuning to prediction and storing the model for later use. Let's see the whole picture: recreating the entire experiment without PyCaret requires more than 100 lines of code in most libraries. The library also lets you do more advanced things, such as advanced pre-processing, ensembling, generalized stacking, and other techniques that allow you to fully customize the ML pipeline and are a must for any data scientist. PyCaret is an open-source, low-code library for ML with Python that allows you to go from preparing your data to deploying your model in minutes. It allows data scientists and analysts to perform iterative data science experiments from start to finish efficiently, and to reach conclusions faster because much less time is spent on programming.
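
As a minimal sketch of the kind of end-to-end pipeline PyCaret enables (the dataset and target column below are illustrative assumptions, not the tutorial's own example):

```python
# Minimal PyCaret pipeline sketch (illustrative dataset and target column).
from pycaret.datasets import get_data
from pycaret.classification import (setup, compare_models, finalize_model,
                                    predict_model, save_model)

data = get_data('diabetes')                                  # sample dataset shipped with PyCaret
setup(data=data, target='Class variable', session_id=123)    # ingestion + pre-processing
best = compare_models()                                      # train and rank candidate models
final = finalize_model(best)                                 # refit the best model on all data
predict_model(final, data=data.drop(columns=['Class variable']))  # score new rows
save_model(final, 'best_pipeline')                           # persist the whole pipeline
```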


Exoplanet Detection using Machine Learning

#artificialintelligence

We introduce a new machine-learning-based technique to detect exoplanets using the transit method. Machine learning and deep learning techniques have proven to be broadly applicable in various scientific research areas. We aim to exploit some of these methods to improve the conventional algorithm-based approach used in astrophysics today to detect exoplanets. We used the popular time-series analysis library 'TSFresh' to extract features from lightcurves. For each lightcurve, we extracted 789 features.
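
As a minimal sketch of that feature-extraction step (the toy lightcurves below are synthetic, not the paper's data), TSFresh's extract_features turns each series into one row of features:

```python
# Minimal TSFresh feature-extraction sketch on synthetic "lightcurves".
import numpy as np
import pandas as pd
from tsfresh import extract_features

# Long-format frame: one row per (lightcurve id, time step, flux value).
rng = np.random.default_rng(0)
frames = []
for curve_id in range(3):
    t = np.arange(200)
    flux = 1.0 + 0.01 * rng.standard_normal(200)
    flux[80:90] -= 0.02                      # a crude transit-like dip
    frames.append(pd.DataFrame({"id": curve_id, "time": t, "flux": flux}))
df = pd.concat(frames, ignore_index=True)

features = extract_features(df, column_id="id", column_sort="time",
                            column_value="flux")
print(features.shape)                        # one row of features per lightcurve
```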