Statistical Learning


Logistic Regression Clearly Explained

#artificialintelligence

Logistic Regression is one of the most widely used classification algorithms in machine learning. It appears in many real-world scenarios such as spam detection and cancer detection, and in classic teaching datasets like Iris. It is mostly used for binary classification problems, but it can also be extended to multiclass classification. Logistic Regression predicts the probability that a given data point belongs to a certain class. In this article, I will be using the famous heart disease dataset from Kaggle.
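The class-probability idea above can be sketched as follows. This is a minimal illustration on synthetic data, not the actual Kaggle heart disease dataset; the feature names in the comments are assumptions.

```python
# Hedged sketch: binary classification with logistic regression, on synthetic
# heart-disease-style data (columns in the comment are illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                   # e.g. age, blood pressure, cholesterol, max heart rate
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic "disease" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

# predict_proba returns the class-membership probabilities the article refers to
proba = clf.predict_proba(X_test)[:, 1]
print(round(clf.score(X_test, y_test), 2))
```

A threshold (commonly 0.5) on `predict_proba` turns the probability into a hard class label.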


Applied Machine Learning Models For Improved Startup Valuation.

#artificialintelligence

Determining the valuation of an early-stage startup is in most cases very challenging due to limited historical data, little to no existing revenue, market uncertainty, and more. Traditional valuation techniques, such as Discounted Cash Flow (DCF) or Comparable Company Analysis (CCA, multiples), therefore often lead to inappropriate results. Alternative valuation methods, on the other hand, remain subject to an individual's subjective assessment and a black box to others. The underlying study therefore leverages machine learning algorithms to predict fair, data-driven, and comprehensible startup valuations. Three different data sources are merged and applied to three regression models.
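The "three regression models" setup can be sketched like this. The study's actual data sources and model choices are not named in the summary, so both the features and the three regressors below are assumptions for illustration:

```python
# Illustrative sketch only: comparing three regression models on synthetic
# "startup" data. Features, target, and model choices are invented here.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                                  # e.g. funding, team size, market signals
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=200)   # synthetic "valuation"

models = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}
scores = {}
for name, model in models.items():
    # cross-validated R^2 gives a comparable score per model
    scores[name] = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
print(scores)
```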


The Best Free Data Science eBooks: 2020 Update - KDnuggets

#artificialintelligence

Description: This book provides the essential language and tools for understanding statistics, randomness, and uncertainty. It explores a wide variety of applications and examples, ranging from coincidences and paradoxes to Google PageRank and Markov chain Monte Carlo (MCMC). Additional application areas include genetics, medicine, computer science, and information theory. The authors present the material in an accessible style and motivate concepts using real-world examples. Be prepared: it is a big book! Also, check out the authors' excellent probability cheat sheet.


Machine learning prediction in cardiovascular diseases: a meta-analysis

#artificialintelligence

Several machine learning (ML) algorithms have been increasingly utilized for cardiovascular disease prediction. We aim to assess and summarize the overall predictive ability of ML algorithms in cardiovascular diseases. A comprehensive search strategy was designed and executed within the MEDLINE, Embase, and Scopus databases from database inception through March 15, 2019. The primary outcome was a composite of the predictive ability of ML algorithms for coronary artery disease, heart failure, stroke, and cardiac arrhythmias. Of 344 total studies identified, 103 cohorts, with a total of 3,377,318 individuals, met our inclusion criteria. For the prediction of coronary artery disease, boosting algorithms had a pooled area under the curve (AUC) of 0.88 (95% CI 0.84–0.91), and custom-built algorithms had a pooled AUC of 0.93 (95% CI 0.85–0.97). For the prediction of stroke, support vector machine (SVM) algorithms had a pooled AUC of 0.92 (95% CI 0.81–0.97), boosting algorithms had a pooled AUC of 0.91 (95% CI 0.81–0.96), and convolutional neural network (CNN) algorithms had a pooled AUC of 0.90 (95% CI 0.83–0.95). For both heart failure and cardiac arrhythmias, there were too few studies per algorithm for meta-analytic methodology, and the confidence intervals of the different methods overlap, showing no clear difference; nonetheless, SVM may outperform other algorithms in these areas. The predictive ability of ML algorithms in cardiovascular diseases is promising, particularly for SVM and boosting algorithms. However, there is heterogeneity among ML algorithms in terms of multiple parameters. This information may assist clinicians in interpreting the data and choosing optimal algorithms for their datasets.
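For readers unfamiliar with the AUC metric being pooled above, here is a minimal sketch of how a single study might compute it for an SVM, on synthetic data (not any of the reviewed cohorts):

```python
# Hedged sketch: computing the area under the ROC curve (AUC) for an SVM
# classifier on synthetic binary-outcome data.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 6))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=400) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
svm = SVC(probability=True, random_state=0).fit(X_tr, y_tr)

# AUC is computed from predicted probabilities, not hard labels
auc = roc_auc_score(y_te, svm.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.2f}")
```

An AUC of 0.5 corresponds to chance-level discrimination; 1.0 is perfect ranking of positives above negatives.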


Python Data Science with Pandas: Master 12 Advanced Projects

#artificialintelligence

Online course (Udemy): Python Data Science with Pandas: Master 12 Advanced Projects. Work with Pandas, SQL databases, JSON, web APIs, and more to master real-world machine learning and finance projects. Created by Alexander Hagmann.

Description: Welcome to the first advanced, project-based Pandas data science course! This course starts where many other courses end. You can write some Pandas code, but you still struggle with real-world projects, because: real-world data is typically not provided in a single (or a few) text/Excel files, so more advanced data-importing techniques are required; real-world data is large, unstructured, nested, and unclean, so more advanced data manipulation and analysis/visualization techniques are required; and many easy-to-use Pandas methods work best on relatively small, clean datasets, while real-world datasets require more general code incorporating other libraries and modules. Whether you need excellent Pandas skills for data analysis, machine learning, or finance, this is the right course to bring your skills to expert level. The course covers the full data workflow A–Z: import complex and nested data from JSON files; efficiently import and merge data from many text/CSV files; clean, handle, and flatten nested and stringified data in DataFrames.
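The "flatten nested data" step mentioned above can be sketched with pandas' `json_normalize`; the records here are invented for illustration, not course material:

```python
# Minimal sketch: flattening nested JSON-style records into a flat DataFrame.
import pandas as pd

records = [
    {"name": "Acme", "location": {"city": "Berlin", "country": "DE"},
     "metrics": {"revenue": 1.2, "employees": 10}},
    {"name": "Globex", "location": {"city": "Austin", "country": "US"},
     "metrics": {"revenue": 3.4, "employees": 25}},
]

# Nested keys become dotted column names, e.g. "location.city"
df = pd.json_normalize(records)
print(df.columns.tolist())
```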


The Disadvantages of MSE Loss and How to Remove Them

#artificialintelligence

Mean Squared Error (MSE) is one of the most widely used and most straightforward regression loss functions in machine learning and data science. It is used in a range of tasks, from linear regression on tabular data to specific use cases in computer vision, NLP, reinforcement learning, etc. In addition to MSE, MAE (Mean Absolute Error) is also widely used and is closely related to MSE. Despite its popularity, MSE has its share of flaws, which I would like to highlight in this article. There are specific ways to mitigate its weaknesses and get better results, which are discussed at the end. The discussion and use cases are kept within computer vision for simplicity and better understanding.
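A quick numeric sketch of the two losses on data with one outlier, which previews the kind of flaw the article discusses: squaring the error makes MSE far more sensitive to outliers than MAE.

```python
# Sketch: MSE vs. MAE on predictions where the last point is an outlier.
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_pred = np.array([1.1, 2.1, 2.9, 4.0, 15.0])   # last prediction is off by 10

mse = np.mean((y_true - y_pred) ** 2)   # squared errors: the outlier dominates
mae = np.mean(np.abs(y_true - y_pred))  # absolute errors: outlier counts linearly
print(f"MSE = {mse:.2f}, MAE = {mae:.2f}")
```

Here the single outlier inflates MSE to roughly ten times MAE, even though four of the five predictions are nearly exact.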


Practical Machine Learning Basics

#artificialintelligence

This article describes my attempt at the Titanic machine learning competition on Kaggle. I had been trying to study machine learning but never got as far as solving real-world problems. After reading two newly released books about practical AI, however, I was confident enough to enter the Titanic competition. The first part of the article describes preparing the data. The second part shows how I used a Support Vector Machine (SVM) to create a model that predicts the survival of the Titanic's passengers. The model achieved a score of 0.779907, which put me in the top 28% of the competition.


XGBoost vs LightGBM on a High Dimensional Dataset

#artificialintelligence

I recently completed a multi-class classification problem given as a take-home assignment for a data scientist position. It was a good opportunity to compare the two state-of-the-art implementations of gradient-boosted decision trees: XGBoost and LightGBM. Both are so powerful that they are consistently among the best-performing machine learning models. The dataset contains over 60 thousand observations and 103 numerical features, and the target variable contains 9 different classes.


Feature Extraction for Graphs

#artificialintelligence

Heads up: I've structured this article similarly to the Graph Representation Learning book by William L. Hamilton [1]. One of the simplest ways to capture information from graphs is to create individual features for each node. These features can capture information both from a close neighbourhood and, using iterative methods, from a more distant K-hop neighbourhood. Node degree is a simple metric, defined as the number of edges incident to a node. It is often used to initialize algorithms that generate more complex graph-level features, such as the Weisfeiler-Lehman kernel.
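The node degree feature can be computed in one line with networkx; the toy graph below is invented for illustration:

```python
# Simple sketch: node degree as a per-node feature on a small toy graph.
import networkx as nx

G = nx.Graph([(0, 1), (0, 2), (0, 3), (1, 2), (3, 4)])
degrees = dict(G.degree())   # node -> number of incident edges
print(degrees)
```

Node 0 has degree 3 (edges to 1, 2, and 3), while the leaf node 4 has degree 1.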


Activation Functions in Deep Learning: From Softmax to Sparsemax -- Math Proof

#artificialintelligence

The objective of this post is three-fold. The first part discusses the motivation behind sparsemax and its relation to softmax, gives a summary of the original research paper in which this activation function was introduced, and provides an overview of the advantages of using sparsemax. Parts two and three are dedicated to the mathematical derivations, concretely finding a closed-form solution as well as an appropriate loss function. In the paper "From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label Classification", Martins et al. propose a new alternative to the widely known softmax activation function: sparsemax. While softmax is an appropriate choice for multi-class classification, outputting a normalized probability distribution over K classes, in many tasks we want an output that is more sparse.
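The sparsity contrast can be made concrete with a small implementation of the sparsemax closed-form solution (Euclidean projection onto the probability simplex): sort the logits, find the support size k(z) and threshold tau(z), then clip. This is an illustrative sketch, not the paper's reference code:

```python
# Hedged sketch of the sparsemax closed form: unlike softmax, logits below
# the threshold tau(z) are mapped to exactly zero.
import numpy as np

def sparsemax(z):
    z_sorted = np.sort(z)[::-1]                  # logits in decreasing order
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum          # k(z) = largest k satisfying this
    k_z = k[support][-1]
    tau = (cumsum[support][-1] - 1) / k_z        # threshold tau(z)
    return np.maximum(z - tau, 0.0)              # clip: output sums to 1

p = sparsemax(np.array([2.0, 1.0, -1.0]))
print(p)   # sums to 1, with exact zeros for the small logits
```

Softmax on the same logits would assign every class a strictly positive probability; sparsemax zeroes out all but the dominant one here.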