Statistical Learning


How to Build Machine Learning Model using SQL

#artificialintelligence

A label is the variable to be predicted. In this example, I will predict whether a website visitor will make any transactions, and I named this label "purchase". It can be derived from the existing variable "totals.transactions". For simplicity, let's make the prediction black or white: either "purchase" or "no purchase". Since model training cannot handle string values as the output, they need to be encoded as numbers.
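The same label-coding step can be sketched outside of SQL; here is a minimal pandas version, assuming the exported sessions land in a flat column named totals_transactions (a stand-in for the nested totals.transactions field, made up for this sketch):

```python
import pandas as pd

# Toy frame standing in for the exported sessions; "totals_transactions"
# plays the role of the nested totals.transactions field.
sessions = pd.DataFrame({
    "visitor_id": [1, 2, 3, 4],
    "totals_transactions": [None, 2, None, 1],
})

# Derive the binary label: 1 = "purchase" (at least one transaction), 0 = "no purchase".
sessions["purchase"] = (sessions["totals_transactions"].fillna(0) > 0).astype(int)

print(sessions[["visitor_id", "purchase"]])
```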


Video Highlights: Deep Learning for Probabilistic Time Series Forecasting - insideBIGDATA

#artificialintelligence

In this Data Science Salon talk, Kashif Rasul, Principal Research Scientist at Zalando, presents some modern probabilistic time series forecasting methods that use deep learning. The Data Science Salon is a unique vertically focused conference that has grown into the most diverse community of senior data science, machine learning, and other technical specialists in the space.


How I'd Learn Data Science If I Were To Start All Over Again

#artificialintelligence

A couple of days ago I started thinking: if I had to start learning machine learning and data science all over again, where would I start? The funny thing was that the path I imagined was completely different from the one I actually took when I was starting out. I'm aware that we all learn in different ways. Some prefer videos, others are fine with just books, and a lot of people need to pay for a course to feel more pressure. And that's OK; the important thing is to learn and enjoy it. So, speaking from my own perspective and knowing how I learn best, I designed the path I would follow if I had to start learning data science again.


PostgreSQL as the substructure for IoT and the next wave of computing

#artificialintelligence

How PostgreSQL accidentally became the ideal platform for IoT applications and services. From mainframes (1950s-1970s), to personal computers (1980s-1990s), to smartphones (2000s-now), each wave brought us smaller yet more powerful machines that were increasingly plentiful and pervasive throughout business and society. We are now sitting on the cusp of another inflection point, or major release if you will, with computing so small and so common that it is becoming nearly as pervasive as the air we breathe. With each wave, software developers and businesses initially struggle to identify the appropriate software infrastructure on which to build their applications. But soon common platforms emerge: Unix; Windows; the LAMP stack; iOS/Android.


AI Without BS -- Newsletter #2

#artificialintelligence

"Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems modulated via nonlinear interlinked gates. The resulting models represent dynamical systems with varying (i.e., liquid) time-constants coupled to their hidden state, with outputs being computed by numerical differential equation solvers. These neural networks exhibit stable and bounded behavior, yield superior expressivity within the family of neural ordinary differential equations, and give rise to improved performance on time-series prediction tasks." The Machine Learning Model Anonymization tool from IBM: "Traditional data anonymization algorithms don't consider the specific analysis the data is being used for. What if a 10-year range of ages is too general for an organization's needs? When these anonymization techniques are applied in the context of machine learning, they tend to significantly degrade the model's accuracy. The tool anonymizes machine learning models while being guided by the model itself. The method is agnostic to the specific learning algorithm and can be easily applied to any machine learning model, making it easy to integrate into existing MLOps pipelines."


Logistic Regression for Binary Classification

#artificialintelligence

In previous articles, I talked about deep learning and the functions used to predict results. In this article, we will use logistic regression to perform binary classification. Binary classification is named this way because it classifies the data into two outcomes: simply put, the result will be "yes" (1) or "no" (0). To determine whether the result is "yes" or "no", we use a probability function that returns a number between 0 and 1, indicating how likely it is that the observation belongs to the class we have designated as "yes".
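A minimal sketch of that idea in Python, assuming the standard logistic (sigmoid) function and using toy data invented for the example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: one feature, binary labels (1 = "yes", 0 = "no").
X = np.array([[0.5], [1.5], [2.0], [3.0], [3.5], [4.5]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)

# predict_proba applies the logistic function 1 / (1 + exp(-z)) to the linear
# score z, so each prediction is a probability between 0 and 1.
probs = model.predict_proba([[2.5]])[:, 1]
print(probs)                          # probability of "yes" for x = 2.5
labels = (probs >= 0.5).astype(int)   # threshold at 0.5 to get "yes" (1) or "no" (0)
print(labels)
```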


Why So Many Data Science Projects Fail to Deliver

#artificialintelligence

This article is based on an in-depth study of the data science efforts in three large, private-sector Indian banks with collective assets exceeding $200 million. The study included onsite observations; semistructured interviews with 57 executives, managers, and data scientists; and the examination of archival records. The five obstacles, and the solutions for overcoming them, emerged from an inductive analytical process based on the qualitative data. More and more companies are embracing data science as a function and a capability, but many of them have not been able to consistently derive business value from their investments in big data, artificial intelligence, and machine learning.


Simple Linear Regression Tutorial for Machine Learning (ML)

#artificialintelligence

Simple linear regression is a statistical approach that allows us to study and summarize the relationship between two continuous quantitative variables. Simple linear regression is used in machine learning models, mathematics, statistical modeling, forecasting epidemics, and other quantitative fields. Out of the two variables, one variable is called the dependent variable, and the other variable is called the independent variable. Our goal is to predict the dependent variable's value based on the value of the independent variable. A simple linear regression aims to find the best relationship between X (independent variable) and Y (dependent variable).
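A minimal sketch of fitting such a line in Python, with X and Y values made up for the example:

```python
import numpy as np

# Made-up data: X is the independent variable, Y the dependent variable.
X = np.array([1, 2, 3, 4, 5], dtype=float)
Y = np.array([2.1, 4.3, 6.2, 8.1, 10.2])

# Least-squares estimates of the slope (b1) and intercept (b0) for Y = b0 + b1 * X.
b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
b0 = Y.mean() - b1 * X.mean()

print(f"Y ≈ {b0:.2f} + {b1:.2f} * X")
print("prediction at X = 6:", b0 + b1 * 6)
```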


Unlocking eCommerce growth with machine learning and behavioural psychology

#artificialintelligence

Olist is the largest eCommerce website in Brazil. It connects small retailers from all over the country so they can sell directly to customers. The business has generously shared a large dataset containing 110k orders placed on its site from 2016 to 2018. The SQL-style relational database includes customers and their orders on the site, covering around 100k unique orders across 73 categories. It also includes item prices, timestamps, reviews, and the geolocation associated with each order.
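To give a feel for the relational layout, here is a small pandas sketch of how two of the tables might be joined; the table and column names are illustrative assumptions, not something stated in the excerpt:

```python
import pandas as pd

# Toy frames mirroring the relational layout described above
# (names loosely modeled on the public Olist release).
orders = pd.DataFrame({
    "order_id": ["o1", "o2"],
    "customer_id": ["c1", "c2"],
    "order_purchase_timestamp": ["2017-05-01 10:00", "2018-01-12 15:30"],
})
items = pd.DataFrame({
    "order_id": ["o1", "o1", "o2"],
    "product_id": ["p1", "p2", "p3"],
    "price": [49.9, 19.9, 120.0],
})

# Joining items onto orders via the shared order_id key reproduces the kind of
# relation the SQL-style schema encodes.
order_items = items.merge(orders, on="order_id", how="left")
print(order_items)
```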


Bayesian Hyperparameter Optimization with tune-sklearn in PyCaret - KDnuggets

#artificialintelligence

In this post, I will show you how easy it is to use other state-of-the-art algorithms with PyCaret thanks to tune-sklearn, a drop-in replacement for scikit-learn's model selection module with cutting-edge hyperparameter tuning techniques. I'll also report results from a series of benchmarks showing how tune-sklearn is able to easily improve classification model performance. Hyperparameter optimization algorithms can vary greatly in efficiency. Random search has been a machine learning staple for a good reason: it's easy to implement and understand, and it gives good results in a reasonable time. However, as the name implies, it is completely random -- a lot of time can be spent evaluating bad configurations.
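A hedged sketch of the workflow the post describes, assuming PyCaret 2.x, where tune_model exposes search_library and search_algorithm parameters and the built-in "juice" dataset has a "Purchase" target:

```python
from pycaret.datasets import get_data
from pycaret.classification import setup, create_model, tune_model

# Load a built-in PyCaret dataset and initialize the experiment.
data = get_data("juice")
setup(data, target="Purchase", session_id=42, silent=True)

# Baseline model, then Bayesian tuning through tune-sklearn instead of random search.
rf = create_model("rf")
tuned_rf = tune_model(
    rf,
    search_library="tune-sklearn",
    search_algorithm="bayesian",   # Bayesian search; needs scikit-optimize installed
    n_iter=20,
)
```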