decision tree


Decision Tree Algorithm, Explained - KDnuggets

#artificialintelligence

Classification in machine learning is a two-step process: a learning step and a prediction step. In the learning step, the model is developed from given training data. In the prediction step, the model is used to predict the response for new data. Decision Tree is one of the easiest and most popular classification algorithms to understand and interpret, and it belongs to the family of supervised learning algorithms.
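The two steps above can be sketched with scikit-learn's `DecisionTreeClassifier` (a standard implementation, not specific to the article; the dataset and `max_depth` choice are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Learning step: fit the tree on labeled training data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

# Prediction step: apply the fitted tree to held-out data.
accuracy = clf.score(X_test, y_test)
```

Because the model is a tree of human-readable split rules, it can also be inspected directly (e.g. via `sklearn.tree.export_text`), which is what makes it easy to interpret.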


Combating Insurance Fraud With Machine Learning Fintech Finance

#artificialintelligence

Most insurance companies depend on human expertise and business rules-based software to protect themselves from fraud. But the drive for digital transformation and process automation means data and scenarios change faster than the rules can be updated. Machine learning has the potential to allow insurers to move from the current state of "detect and react" to "predict and prevent." It excels at automating the process of taking large volumes of data, analysing multiple fraud indicators in parallel – each of which may be quite normal in isolation – and finding potential fraud. Generally, there are two ways to teach or train a machine learning algorithm, depending on the available data: supervised and unsupervised learning.


Tracking Dynamic Sources of Malicious Activity at Internet Scale

Neural Information Processing Systems

We formulate and address the problem of discovering dynamic malicious regions on the Internet. We model this problem as one of adaptively pruning a known decision tree, but with additional challenges: (1) severe space requirements, since the underlying decision tree has over 4 billion leaves, and (2) a changing target function, since malicious activity on the Internet is dynamic. We present a novel algorithm that addresses this problem by combining a number of different experts algorithms and online paging algorithms. We prove guarantees on our algorithm's performance as a function of the best possible pruning of a similar size, and our experiments show that our algorithm achieves high accuracy on large real-world data sets, with significant improvements over existing approaches.



Efficiently Learning Fourier Sparse Set Functions

Neural Information Processing Systems

Learning set functions is a key challenge arising in many domains, ranging from sketching graphs to black-box optimization with discrete parameters. In this paper we consider the problem of efficiently learning set functions that are defined over a ground set of size $n$ and that are sparse (say $k$-sparse) in the Fourier domain. This is a wide class that includes graph and hypergraph cut functions, decision trees, and more. Our central contribution is the first algorithm that allows learning functions whose Fourier support only contains low degree (say degree $d = o(n)$) polynomials using $O(kd \log n)$ sample complexity and runtime $O(kn \log^2 k \log n \log d)$. This implies that sparse graphs with $k$ edges can, for the first time, be learned from $O(k \log n)$ observations of cut values and in linear time in the number of vertices.


Minimal Variance Sampling in Stochastic Gradient Boosting

Neural Information Processing Systems

Stochastic Gradient Boosting (SGB) is a widely used approach to regularization of boosting models based on decision trees. It was shown that, in many cases, random sampling at each iteration can lead to better generalization performance of the model and can also decrease the learning time. Different sampling approaches were proposed, where probabilities are not uniform, and it is not currently clear which approach is the most effective. In this paper, we formulate the problem of randomization in SGB in terms of optimization of sampling probabilities to maximize the estimation accuracy of split scoring used to train decision trees. This optimization problem has a closed-form nearly optimal solution, and it leads to a new sampling technique, which we call Minimal Variance Sampling (MVS). The method both decreases the number of examples needed for each iteration of boosting and significantly increases the quality of the model as compared to state-of-the-art sampling methods. The superiority of the algorithm was confirmed by introducing MVS as the new default option for subsampling in CatBoost, a gradient boosting library achieving state-of-the-art quality on various machine learning tasks.
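The core idea — sample examples with probability tied to a regularized gradient magnitude, then reweight to keep split-score estimates unbiased — can be sketched roughly as below. This is a simplified illustration, not the CatBoost implementation; the function name, the `lam` regularizer, and the probability scaling are all assumptions:

```python
import numpy as np

def mvs_sample(gradients, sample_rate=0.5, lam=1.0, seed=None):
    """Rough MVS-style sketch: keep each example with probability
    proportional to sqrt(g^2 + lam), capped at 1, then assign
    importance weights 1/p so sampled statistics stay unbiased."""
    rng = np.random.default_rng(seed)
    scores = np.sqrt(gradients ** 2 + lam)
    # Scale so the expected number of kept examples matches sample_rate.
    probs = np.minimum(1.0, scores * sample_rate * len(scores) / scores.sum())
    keep = rng.random(len(scores)) < probs
    weights = np.where(keep, 1.0 / probs, 0.0)
    return keep, weights
```

Examples with large gradients are kept almost surely, while small-gradient examples are subsampled and up-weighted — which is how the method cuts the per-iteration example count without biasing the split scores.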


Decision Jungles: Compact and Rich Models for Classification

Neural Information Processing Systems

Randomized decision trees and forests have a rich history in machine learning and have seen considerable success in application, perhaps particularly so for computer vision. However, they face a fundamental limitation: given enough data, the number of nodes in decision trees will grow exponentially with depth. For certain applications, for example on mobile or embedded processors, memory is a limited resource, and so the exponential growth of trees limits their depth, and thus their potential accuracy. This paper proposes decision jungles, revisiting the idea of ensembles of rooted decision directed acyclic graphs (DAGs), and shows these to be compact and powerful discriminative models for classification. Unlike conventional decision trees that only allow one path to every node, a DAG in a decision jungle allows multiple paths from the root to each leaf.
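The structural difference the abstract describes can be shown with a toy routing function over a node table (node names, features, and thresholds below are invented for illustration): in a DAG, two parents may share a child, so node count need not double at every level:

```python
# Each internal node: (feature_index, threshold, left_id, right_id).
# Leaves are marked with None. Note both "a" and "b" route to "shared",
# which a conventional decision tree would have to duplicate.
dag = {
    "root":   (0, 0.5, "a", "b"),
    "a":      (1, 0.3, "shared", "leaf1"),
    "b":      (1, 0.7, "shared", "leaf2"),
    "shared": None,
    "leaf1":  None,
    "leaf2":  None,
}

def route(dag, x, node="root"):
    """Follow split decisions until a leaf (None entry) is reached."""
    while dag[node] is not None:
        feat, thr, left, right = dag[node]
        node = left if x[feat] <= thr else right
    return node
```

Here two distinct root-to-leaf paths end at the same leaf `"shared"` — exactly the node merging that lets jungles stay compact at depths where trees would blow up.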


Local Decorrelation For Improved Pedestrian Detection

Neural Information Processing Systems

Even with the advent of more sophisticated, data-hungry methods, boosted decision trees remain extraordinarily successful for fast rigid object detection, achieving top accuracy on numerous datasets. While effective, most boosted detectors use decision trees with orthogonal (single feature) splits, and the topology of the resulting decision boundary may not be well matched to the natural topology of the data. Given highly correlated data, decision trees with oblique (multiple feature) splits can be effective. Use of oblique splits, however, comes at considerable computational expense. Inspired by recent work on discriminative decorrelation of HOG features, we instead propose an efficient feature transform that removes correlations in local neighborhoods.
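A generic decorrelation (whitening) transform of the kind alluded to — not the paper's exact method — multiplies centered features by the inverse square root of their covariance, after which orthogonal single-feature splits face axis-aligned structure:

```python
import numpy as np

def decorrelate(X, eps=1e-6):
    """Whiten features: center X, then apply the inverse square root of
    the covariance (with a small ridge eps for numerical stability),
    so the transformed features are approximately uncorrelated."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False) + eps * np.eye(X.shape[1])
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T  # symmetric (ZCA-style) whitening
    return Xc @ W
```

After this transform the feature covariance is close to the identity, which is why cheap orthogonal splits can approximate what oblique splits achieve on correlated data.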


Label Distribution Learning Forests

Neural Information Processing Systems

Label distribution learning (LDL) is a general learning framework, which assigns to an instance a distribution over a set of labels rather than a single label or multiple labels. Current LDL methods have either restricted assumptions on the expression form of the label distribution or limitations in representation learning, e.g., the inability to learn deep features in an end-to-end manner. This paper presents label distribution learning forests (LDLFs) - a novel label distribution learning algorithm based on differentiable decision trees, which have a key advantage: decision trees have the potential to model any general form of label distribution by a mixture of leaf node predictions. We define a distribution-based loss function for a forest, enabling all the trees to be learned jointly, and show that an update function for leaf node predictions, which guarantees a strict decrease of the loss function, can be derived by variational bounding. The effectiveness of the proposed LDLFs is verified on several LDL tasks and a computer vision application, showing significant improvements over state-of-the-art LDL methods.


Series: The AI Evolution in Commercial Pharma

#artificialintelligence

Over the last decade we have seen a transformation in global pharma's commercial model as the industry rightsized itself from the blockbuster era of the '90s and early 2000s. The era of competing on share of voice, enabled by armies of sales representatives calling on health care professionals, evolved into smarter and more sophisticated strategies for deploying sales and marketing resources--and doing more with less. Over the same period, we have witnessed a significant evolution in the rise of advanced analytics and especially the talk and the promise of Artificial Intelligence (AI) in commercial pharma, so it's time to ask the question--Can AI really help pharma prosper? McKinsey tends to think so. In a report entitled "Artificial Intelligence in Business", they concluded that AI and analytics would contribute $440 billion in potential annual value in the pharmaceutical and medical device sector, with the major share, over $200 billion, coming from value released in marketing and sales.