
Decision Tree Learning


Fine-Tuning ML Hyperparameters

#artificialintelligence

"Just as electricity transformed almost every industry 100 years ago, today I actually have hard time thinking of an industry that I don't think AI (Artificial Intelligence) will transform in the next several years" -- Andrew NG I have long been fascinated with these algorithms, capable of something that we can as humans barely begin to comprehend. However, even with all these resources one of the biggest setbacks any ML practitioner has ever faced would be tuning the model's hyperparameters. A hyperparameter is a parameter whose value is used to control the learning process. By contrast, the values of other parameters (typically node weights) are learned. The same kind of machine learning model can be trained on different constraints, learning rates or kernels and other such parameters to generalize to different datasets, and hence these instructions have to be tuned so that the model can optimally solve the machine learning problem.


Why White-Box Models in Enterprise Data Science Work More Efficiently

#artificialintelligence

Data science is the current powerhouse for organizations, turning mountains of data into actionable insights that affect every part of the business, including customer experience, revenue, operations, risk management and other functions. Data science has the potential to dramatically accelerate digital transformation initiatives, delivering greater performance and an advantage over the competition. However, not all data science platforms and methodologies are created equal. Using data science to make predictions and decisions that optimize business outcomes requires transparency and accountability. Several underlying factors matter, such as trust, confidence in the predictions and an understanding of how the technology works, but fundamentally it comes down to whether the platform uses a black-box or a white-box model approach.


Machine Learning: An Introduction to Decision Trees

#artificialintelligence

Machine Learning for trading is the new buzzword today, and some tech companies are doing remarkable things with it. Today, we're going to show you how you can predict stock movements (either up or down) with the help of 'Decision Trees', one of the most commonly used ML algorithms. Decision trees in Machine Learning are used to build classification and regression models for data mining and trading. A decision tree algorithm performs a set of recursive actions before it arrives at the end result, and when you plot these actions on a screen, the visual looks like a big tree, hence the name 'Decision Tree'. Basically, a decision tree is a flowchart that helps you make decisions.
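As a rough illustration of the idea (not the article's trading setup), the sketch below fits a decision tree to synthetic daily returns, using the three previous returns as features and next-day direction as the label.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, 500)      # synthetic daily returns (assumed)

# Features: the three previous daily returns; label: 1 if the next day is up.
X = np.column_stack([returns[0:-3], returns[1:-2], returns[2:-1]])
y = (returns[3:] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
print("test accuracy:", tree.score(X_test, y_test))
```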


Why Your Company Needs White-Box Models in Enterprise Data Science - AI Trends

#artificialintelligence

AI is having a profound impact on customer experience, revenue, operations, risk management and other business functions across multiple industries. When fully operationalized, AI and Machine Learning (ML) enable organizations to make data-driven decisions with unprecedented levels of speed, transparency, and accountability. This dramatically accelerates digital transformation initiatives, delivering greater performance and a competitive edge to organizations. However, ML projects in data science labs tend to adopt black-box approaches that generate minimal actionable insights and result in a lack of accountability in the data-driven decision-making process. Today, with the advent of AutoML 2.0 platforms, a white-box model approach is becoming increasingly important and possible.


FastForest: Increasing Random Forest Processing Speed While Maintaining Accuracy

arXiv.org Machine Learning

Random Forest remains one of Data Mining's most enduring ensemble algorithms, achieving well-documented levels of accuracy and processing speed and regularly appearing in new research. However, with data mining now reaching the domain of hardware-constrained devices such as smartphones and Internet of Things (IoT) devices, there is a continued need for research into algorithm efficiency that delivers greater processing speed without sacrificing accuracy. Our proposed FastForest algorithm delivers an average 24% increase in processing speed compared with Random Forest whilst maintaining (and frequently exceeding) its classification accuracy over tests involving 45 datasets. FastForest achieves this result through a combination of three optimising components - Subsample Aggregating ('Subbagging'), Logarithmic Split-Point Sampling and Dynamic Restricted Subspacing. Moreover, detailed testing of Subbagging sizes has found an optimal scalar delivering a positive mix of processing performance and accuracy.
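One of the three components, subbagging, can be approximated with off-the-shelf tools: train each tree on a without-replacement subsample rather than a full-size bootstrap. The sketch below assumes a 0.5 subsample fraction for illustration; it is not the paper's tuned scalar or implementation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Subbagging: bootstrap=False draws each tree's training set without
# replacement; max_samples=0.5 is an assumed subsample size.
subbagged = BaggingClassifier(
    DecisionTreeClassifier(max_features="sqrt"),  # random-subspace splits
    n_estimators=100,
    max_samples=0.5,
    bootstrap=False,
    random_state=0,
).fit(X, y)
```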


Probabilistic Diagnostic Tests for Degradation Problems in Supervised Learning

arXiv.org Artificial Intelligence

Several studies point out different causes of performance degradation in supervised machine learning. Problems such as class imbalance, overlapping, small disjuncts, noisy labels, and sparseness limit accuracy in classification algorithms. Even though a number of approaches, whether in the form of a methodology or an algorithm, try to minimize performance degradation, they have been isolated efforts with limited scope. Most of these approaches focus on remediating one among many problems, with experimental results coming from few datasets and classification algorithms, insufficient measures of prediction power, and a lack of statistical validation of the real benefit of the proposed approach. This paper consists of two main parts. In the first part, a novel probabilistic diagnostic model based on identifying signs and symptoms of each problem is presented; the aim is early and correct diagnosis of these problems in order to select not only the most suitable remediation treatment but also unbiased performance metrics. In the second part, the behavior and performance of several supervised algorithms are studied when training sets have such problems, so that the success of treatments can be predicted across classifiers.
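The paper's diagnostic model itself is probabilistic, but as a toy illustration of one 'symptom' it considers, class imbalance in a training set can be measured with a simple imbalance ratio:

```python
import numpy as np

def imbalance_ratio(y):
    """Majority/minority class size ratio; 1.0 means perfectly balanced."""
    _, counts = np.unique(y, return_counts=True)
    return counts.max() / counts.min()

y = np.array([0] * 950 + [1] * 50)
print(imbalance_ratio(y))   # 19.0 -- a strongly imbalanced training set
```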


A survey of bias in Machine Learning through the prism of Statistical Parity for the Adult Data Set

arXiv.org Machine Learning

Applications based on Machine Learning models have now become an indispensable part of everyday life and the professional world. A critical question has recently arisen among the public: do algorithmic decisions convey any type of discrimination against specific groups of the population or minorities? In this paper, we show the importance of understanding how bias can be introduced into automatic decisions. We first present a mathematical framework for the fair learning problem, specifically in the binary classification setting. We then propose to quantify the presence of bias by using the standard Disparate Impact index on the real and well-known Adult income data set. Finally, we check the performance of different approaches aiming to reduce bias in binary classification outcomes. Importantly, we show that some intuitive methods are ineffective. This sheds light on the fact that making fair machine learning models can be a particularly challenging task, especially when the training observations contain bias.
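For reference, the Disparate Impact index compares positive-outcome rates across groups. Here is a sketch under the usual definition; the variable names and toy data are ours, not the paper's.

```python
import numpy as np

def disparate_impact(y_pred, protected):
    """DI = P(y_hat = 1 | protected) / P(y_hat = 1 | unprotected)."""
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected, dtype=bool)
    return y_pred[protected].mean() / y_pred[~protected].mean()

# Toy predictions: 30% positive rate for the protected group, 60% otherwise.
rng = np.random.default_rng(0)
protected = rng.random(1000) < 0.5
y_hat = np.where(protected, rng.random(1000) < 0.3, rng.random(1000) < 0.6)
print(disparate_impact(y_hat, protected))  # well below the 0.8 rule of thumb
```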


Unpack Local Model Interpretation for GBDT

arXiv.org Machine Learning

Because GBDT inherits strong predictive performance from its ensemble nature, much attention has been devoted to optimizing this model. With its popularization, the need for model interpretation is growing. Besides the commonly used feature importance as a global interpretation, feature contribution is a local measure that reveals the relationship between a specific instance and the corresponding output. This work focuses on local interpretation and proposes a unified computation mechanism to obtain instance-level feature contributions for any version of GBDT. The practicality of this mechanism is validated by the experiments listed as well as by applications in real industry scenarios.
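The paper's unified mechanism is not reproduced here, but the general idea of a local feature contribution can be sketched for a single regression tree: walk the instance's decision path and credit each split's change in node value to the feature split on.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)

def local_contributions(fitted_tree, x):
    """Return (bias, per-feature value deltas) along x's decision path."""
    t = fitted_tree.tree_
    node, contrib = 0, np.zeros(x.shape[0])
    while t.children_left[node] != -1:           # -1 marks a leaf node
        feat = t.feature[node]
        nxt = (t.children_left[node] if x[feat] <= t.threshold[node]
               else t.children_right[node])
        contrib[feat] += t.value[nxt][0, 0] - t.value[node][0, 0]
        node = nxt
    return t.value[0][0, 0], contrib

bias, contrib = local_contributions(tree, X[0])
print(bias + contrib.sum(), tree.predict(X[:1])[0])   # the two should match
```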


Machine Learning in GIS: Understand the Theory and Practice

#artificialintelligence

This course is designed to equip you with the theoretical and practical knowledge of Machine Learning as applied to geospatial analysis, namely Geographic Information Systems (GIS) and Remote Sensing. By the end of the course, you will be confident in your understanding of Machine Learning applications in GIS technology and in using Machine Learning algorithms for various geospatial tasks, such as land use and land cover mapping (classification) and object-based image analysis (segmentation). This course will also prepare you for using GIS with open-source and free software tools. In the course, you will apply Machine Learning algorithms such as Random Forest, Support Vector Machines and Decision Trees (among others) to the classification of satellite imagery. On top of that, you will practice GIS by completing an entire GIS project, exploring the power of Machine Learning, cloud computing and Big Data analysis with Google Earth Engine for any geographic area in the world.
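As a taste of the classification task the course covers, the sketch below trains a Random Forest on per-pixel spectral features; the bands, labels and grid shape are synthetic stand-ins for a real labelled raster.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pixels, n_bands = 2000, 6                 # assumed: six spectral bands
X = rng.random((n_pixels, n_bands))         # per-pixel reflectance values
y = rng.integers(0, 3, n_pixels)            # assumed classes: water/forest/urban

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
land_cover = rf.predict(X).reshape(40, 50)  # back to a 40x50 raster grid
```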


SAS and R Integration for Machine Learning

#artificialintelligence

R first appeared in 1993 and has gained a steady and fiercely loyal fan base. But as data sets become both longer and wider, storage and processing speed become an issue. Having spent weeks whipping an extremely wide and messy data set into shape using only R, I am grateful for SAS Viya and for not having to go through that again. SAS Viya is a cloud-enabled, in-memory analytics engine that allows for rapid analytic insights. SAS Viya uses SAS Cloud Analytic Services (CAS) to perform various actions and tasks.