On Moving from Statistics to Machine Learning, the Final Stage of Grief

#artificialintelligence

I've spent the last few months preparing for and applying for data science jobs. It's possible the data science world may reject me and my lack of both experience and a credential above a bachelor's degree, in which case I'll do something else. Regardless of what lies in store for my future, I think I've gotten a good grasp of the mindset underlying machine learning and how it differs from traditional statistics, so I thought I'd write about it for those who have a similar background to me and are considering a similar move. This post is geared toward people who are excellent at statistics but don't really "get" machine learning and want to understand the gist of it in about 15 minutes of reading. If you have a traditional academic stats background (be it econometrics, biostatistics, psychometrics, etc.), there are two good reasons to learn more about data science: the world of data science is, in many ways, hiding in plain sight from the more academically minded quantitative disciplines.


2 books to strengthen your command of Python machine learning

#artificialintelligence

This post is part of "AI education," a series of posts that review and explore educational content on data science and machine learning. Mastering machine learning is not easy, even if you're a crack programmer. I've seen many people come from a solid background of writing software in different domains (gaming, web, multimedia, etc.) thinking that adding machine learning to their roster of skills would be another walk in the park. And every single one of them has been dismayed. I see two reasons why the challenges of machine learning are misunderstood. First, as the name suggests, machine learning is software that learns by itself, as opposed to being instructed on every single rule by a developer.


Data Analytics and Mining for Dummies – Data Science Blog (English only)

#artificialintelligence

Data Analytics and Mining is often perceived as an extremely tricky task, cut out for data analysts and data scientists with thorough knowledge spanning several domains such as mathematics, statistics, computer algorithms, and programming. However, several tools available today make it possible for novice programmers, or people with absolutely no algorithmic or programming expertise, to carry out Data Analytics and Mining. One such tool, which is very powerful and provides a graphical user interface and an assembly of nodes for ETL (Extraction, Transformation, Loading) as well as for modeling, data analysis, and visualization with little or no programming, is the KNIME Analytics Platform. KNIME, or the Konstanz Information Miner, was developed at the University of Konstanz and is now popular with a large international community of developers. KNIME was originally made for commercial use, but it is now available as open source software; it has been used extensively in pharmaceutical research since 2006 and is also a powerful data mining tool for the financial data sector. It is also frequently used in the Business Intelligence (BI) sector.


Visualizing Principal Components for Images - Hi! I am Nagdev

#artificialintelligence

Principal Component Analysis (PCA) is a great tool for data analysis projects, for a lot of reasons. If you have never heard of PCA: in simple words, it performs a linear transformation of your features using their covariance or correlation. I will add a few links below if you want to know more about it. Some of the applications of PCA are dimensionality reduction, feature analysis, data compression, anomaly detection, clustering, and many more. The first time I learned about PCA, it was not easy to understand and quite confusing.
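As a quick illustration of that linear transformation, here is a minimal sketch using scikit-learn's bundled digits dataset as a stand-in for image data; the post's own dataset and code are not shown here, so everything below is illustrative:

```python
# Minimal PCA sketch on image data, using scikit-learn's bundled
# digits dataset as a stand-in (not the post's actual code).
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

digits = load_digits()          # 1797 images, 8x8 pixels flattened to 64 features
pca = PCA(n_components=2)       # project onto the top 2 principal components
projected = pca.fit_transform(digits.data)

print(pca.explained_variance_ratio_)  # variance captured by each component

plt.scatter(projected[:, 0], projected[:, 1], c=digits.target, cmap='tab10', s=10)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.colorbar(label='digit')
plt.show()
```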


Visualizing High-Dimensional Microbiome Data

#artificialintelligence

To follow along, you can either download our Jupyter notebook here, or continue reading and typing in the following code as you proceed through the walkthrough. Unsupervised machine learning methods can allow us to understand and explore data in situations where we are not given explicit labels. One family of unsupervised machine learning methods is clustering. Getting a general idea of groups or clusters of similar data points can inform us of underlying structural patterns in our data, such as geography, functional similarities, or communities, when we would not otherwise know this information beforehand. We will be applying our dimensionality reduction techniques to microbiome data acquired from UCSD's Qiita platform.
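The Qiita dataset itself isn't reproduced here, so the following is a minimal sketch of the general recipe on synthetic data: reduce a high-dimensional table to two dimensions, cluster it, and plot the result. The function choices and parameters are assumptions for illustration, not the notebook's actual code:

```python
# Minimal sketch: dimensionality reduction + clustering on synthetic
# high-dimensional data (a stand-in for the Qiita microbiome table).
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

X, _ = make_blobs(n_samples=200, n_features=100, centers=3, random_state=0)

embedding = PCA(n_components=2).fit_transform(X)            # 100-D down to 2-D
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, s=15)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
```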


kNN Imputation for Missing Values in Machine Learning

#artificialintelligence

Datasets may have missing values, and this can cause problems for many machine learning algorithms. As such, it is good practice to identify and replace missing values for each column in your input data prior to modeling your prediction task. This is called missing data imputation, or imputing for short. A popular approach to missing data imputation is to use a model to predict the missing values. This requires a model to be created for each input variable that has missing values.
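One common off-the-shelf implementation of this idea is scikit-learn's KNNImputer, which fills each missing value from the k most similar rows. Here is a minimal sketch on a toy array, not the article's dataset:

```python
# Minimal sketch of kNN imputation with scikit-learn's KNNImputer
# (toy array for illustration; the article works with a real dataset).
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0, 2.0, np.nan],
              [3.0, 4.0, 3.0],
              [np.nan, 6.0, 5.0],
              [8.0, 8.0, 7.0]])

imputer = KNNImputer(n_neighbors=2)   # average values from the 2 nearest rows
X_filled = imputer.fit_transform(X)
print(X_filled)
```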


Data Preparation for Machine Learning (7-Day Mini-Course)

#artificialintelligence

Data preparation involves transforming raw data into a form that is more appropriate for modeling. Preparing data may be the most important part of a predictive modeling project, and the most time-consuming, although it seems to be the least discussed. Instead, the focus is on machine learning algorithms, whose usage and parameterization have become quite routine. Practical data preparation requires knowledge of data cleaning, feature selection, data transforms, dimensionality reduction, and more. In this crash course, you will discover how you can get started and confidently prepare data for a predictive modeling project with Python in seven days. This is a big and important post.
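As a taste of what those steps look like chained together, here is a minimal sketch of a scikit-learn pipeline covering a data transform, feature selection, and dimensionality reduction on synthetic data; the course's own exercises will differ:

```python
# Minimal sketch of a data-preparation pipeline in scikit-learn
# (synthetic data; steps chosen to mirror the topics listed above).
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=1)

pipeline = Pipeline([
    ('scale', StandardScaler()),               # data transform
    ('select', SelectKBest(f_classif, k=10)),  # feature selection
    ('reduce', PCA(n_components=5)),           # dimensionality reduction
    ('model', LogisticRegression()),
])

scores = cross_val_score(pipeline, X, y, cv=5)
print('Accuracy: %.3f' % scores.mean())
```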


Predicting Used Car Prices with Machine Learning

#artificialintelligence

The prices of new cars are fixed by the manufacturer, with some additional costs incurred by the government in the form of taxes. So customers buying a new car can be assured that the money they invest will be worth it. But due to the increased price of new cars and the inability of many customers to afford them, used car sales are on a global increase (Pal, Arora and Palakurthy, 2018). There is a need for a used car price prediction system that can effectively determine the worth of a car using a variety of features. Even though there are websites that offer this service, their prediction methods may not be the best. Besides, different models and systems may contribute predictive power toward a used car's actual market value. It is important to know a car's actual market value when both buying and selling, so being able to predict used cars' market value can help both buyers and sellers. Used car sellers (dealers) are one of the biggest target groups that can be interested in the results of this study: if they better understand what makes a car desirable and what the important features are for a used car, they may apply this knowledge to offer a better service. Online pricing services are another: these websites offer an estimated value for a car, and they may have a good prediction model.
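As a sketch of what such a prediction system might look like, here is a toy regression on made-up data; the column names (year, mileage, brand, price) and values are hypothetical, not the study's actual features or model:

```python
# Minimal sketch of a used-car price regressor on toy data
# (hypothetical columns and values, for illustration only).
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.DataFrame({
    'year':    [2012, 2015, 2018, 2010, 2016, 2019, 2013, 2017],
    'mileage': [90000, 60000, 30000, 120000, 50000, 20000, 80000, 40000],
    'brand':   ['ford', 'toyota', 'toyota', 'ford', 'bmw', 'bmw', 'ford', 'toyota'],
    'price':   [6000, 11000, 17000, 4000, 18000, 27000, 7500, 14000],
})
X = pd.get_dummies(df[['year', 'mileage', 'brand']])  # one-hot encode brand
y = df['price']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print('MAE:', mean_absolute_error(y_test, model.predict(X_test)))
```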


Handling Missing Data For Advanced Machine Learning

#artificialintelligence

Throughout this article, you will become good at spotting, understanding, and imputing missing data. We demonstrate various imputation techniques on a real-world logistic regression task using Python. Properly handling missing data improves inferences and predictions, and this is not to be ignored. The first part of this article presents a framework for understanding missing data.
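To make the "imputation improves predictions" point concrete, here is a minimal sketch assuming a simple mean-imputation strategy feeding a logistic regression, on synthetic data with values knocked out at random; the article's real-world task and imputation techniques will differ:

```python
# Minimal sketch: mean imputation feeding a logistic regression
# (synthetic data with ~10% of values deleted; not the article's task).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.1] = np.nan   # inject missing values at random

pipe = make_pipeline(SimpleImputer(strategy='mean'), LogisticRegression())
print('Accuracy: %.3f' % cross_val_score(pipe, X, y, cv=5).mean())
```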


A closer look at SageMaker Studio, AWS' machine learning IDE

#artificialintelligence

Back in December, when AWS launched its new machine learning IDE, SageMaker Studio, we wrote up a "hot-off-the-presses" review. At the time, we felt the platform fell short, but we promised to publish an update after working with AWS to get more familiar with the new capabilities. When Amazon launched SageMaker Studio, they made clear the pain points they were aiming to solve: "The machine learning development workflow is still very iterative, and is challenging for developers to manage due to the relative immaturity of ML tooling." The machine learning workflow -- from data ingestion, feature engineering, and model selection to debugging, deployment, monitoring, and maintenance, along with all the steps in between -- can be like trying to tame a wild animal. To solve this challenge, big tech companies have built their own machine learning and big data platforms for their data scientists to use: Uber has Michelangelo, Facebook (and likely Instagram and WhatsApp) has FBLearner Flow, Google has TFX, and Netflix has both Metaflow and Polynote (the latter has been open-sourced).