Statistical Attribution & Optimization in the B2B World.

@machinelearnbot

There has been a lot of activity recently around revenue attribution: marketers want a better understanding of their customer acquisition funnel and the ability to measure progress against it. Most of this attention has focused on the B2C space; far less work has been done measuring the performance of B2B marketing activities. While Salesforce is an excellent platform for managing leads and campaigns, its business model is founded on building a sales and marketing ecosystem of partnerships with specialist vendors that provide more focused solutions to specific sales and marketing problems. As a result, companies such as Full Circle Insights, BrightFunnel, and Bizible have emerged to fill the void in B2B marketing attribution by leveraging the Salesforce platform.


Learning Explainable Models Using Attribution Priors

arXiv.org Machine Learning

Two important topics in deep learning both involve incorporating humans into the modeling process: Model priors transfer information from humans to a model by constraining the model's parameters; Model attributions transfer information from a model to humans by explaining the model's behavior. We propose connecting these topics with attribution priors (https://github.com/suinleelab/attributionpriors), which allow humans to use the common language of attributions to enforce prior expectations about a model's behavior during training. We develop a differentiable axiomatic feature attribution method called expected gradients and show how to directly regularize these attributions during training. We demonstrate the broad applicability of attribution priors ($\Omega$) by presenting three distinct examples that regularize models to behave more intuitively in three different domains: 1) on image data, $\Omega_{\textrm{pixel}}$ encourages models to have piecewise smooth attribution maps; 2) on gene expression data, $\Omega_{\textrm{graph}}$ encourages models to treat functionally related genes similarly; 3) on a health care dataset, $\Omega_{\textrm{sparse}}$ encourages models to rely on fewer features. In all three domains, attribution priors produce models with more intuitive behavior and better generalization performance by encoding constraints that would otherwise be very difficult to encode using standard model priors.
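The core idea of training against an attribution prior can be sketched in a few lines. The following is a minimal, illustrative PyTorch sketch, not the authors' released implementation at the repository above: it approximates expected gradients by averaging input gradients along random interpolations between each input and baselines drawn from the batch, then adds a total-variation-style penalty on the attribution map in the spirit of the pixel smoothness prior. The names (model, x, y), the NCHW image layout, and the sample count are assumptions.

import torch
import torch.nn.functional as F

def expected_gradients(model, x, y, baselines, n_samples=8):
    # Approximate expected gradients: average the input gradient of the target
    # class score along random interpolations between x and random baselines.
    attributions = torch.zeros_like(x)
    for _ in range(n_samples):
        idx = torch.randint(0, baselines.shape[0], (x.shape[0],), device=x.device)
        ref = baselines[idx]
        alpha = torch.rand(x.shape[0], 1, 1, 1, device=x.device)  # assumes NCHW images
        interp = (ref + alpha * (x - ref)).detach().requires_grad_(True)
        score = model(interp).gather(1, y[:, None]).sum()
        # create_graph=True keeps the graph so the penalty is differentiable
        # with respect to the model parameters during training.
        grads = torch.autograd.grad(score, interp, create_graph=True)[0]
        attributions = attributions + (x - ref) * grads / n_samples
    return attributions

def pixel_smoothness_penalty(attr):
    # Total-variation-style penalty: neighbouring pixels should receive similar attribution.
    dh = (attr[:, :, 1:, :] - attr[:, :, :-1, :]).abs().mean()
    dw = (attr[:, :, :, 1:] - attr[:, :, :, :-1]).abs().mean()
    return dh + dw

def loss_with_attribution_prior(model, x, y, lam=0.1):
    # Standard task loss plus a pixel-smoothness attribution prior term.
    task_loss = F.cross_entropy(model(x), y)
    attr = expected_gradients(model, x, y, baselines=x)
    return task_loss + lam * pixel_smoothness_penalty(attr)

Swapping the penalty function is what distinguishes the three priors in the paper: a graph Laplacian penalty over gene attributions or an L1 penalty on total per-feature attribution would play the same role as the smoothness term here.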


Robust Explainability: A Tutorial on Gradient-Based Attribution Methods for Deep Neural Networks

arXiv.org Artificial Intelligence

With the rise of deep neural networks, the challenge of explaining the predictions of these networks has become increasingly recognized. While many methods for explaining the decisions of deep neural networks exist, there is currently no consensus on how to evaluate them. On the other hand, robustness is a popular topic in deep learning research; however, until very recently it was hardly discussed in the context of explainability. In this tutorial paper, we start by presenting gradient-based interpretability methods. These techniques use gradient signals to assign the burden of the decision to the input features. We then discuss how gradient-based methods can be evaluated for their robustness and the role that adversarial robustness plays in producing meaningful explanations. We also discuss the limitations of gradient-based methods. Finally, we present the best practices and attributes that should be examined before choosing an explainability method. We conclude with future directions for research in the area at the convergence of robustness and explainability.
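As a point of reference, the vanilla gradient (saliency) map is the simplest of the gradient-based methods such tutorials cover. The short sketch below, with an assumed PyTorch classifier named model, shows the basic recipe: backpropagate a class score to the input and take the gradient magnitude per input element.

import torch

def saliency_map(model, x, target_class):
    # Vanilla gradient saliency: magnitude of d(class score) / d(input element).
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    score = logits[torch.arange(x.shape[0]), target_class].sum()
    score.backward()
    return x.grad.abs()

More refined variants (integrated gradients, SmoothGrad, and the like) differ mainly in how they sample or accumulate these input gradients before taking the magnitude.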


Consistent feature attribution for tree ensembles

arXiv.org Machine Learning

Note: a newer, expanded version of this paper is available at arXiv:1802.03888. It is critical in many applications to understand what features are important for a model, and why individual predictions were made. For tree ensemble methods these questions are usually answered by attributing importance values to input features, either globally or for a single prediction. Here we show that current feature attribution methods are inconsistent, meaning that changing the model to rely more on a given feature can actually decrease the importance assigned to that feature. To address this problem we develop fast exact solutions for SHAP (SHapley Additive exPlanation) values, which were recently shown to be the unique additive feature attribution method based on conditional expectations that is both consistent and locally accurate. We integrate these improvements into the latest version of XGBoost, demonstrate the inconsistencies of current methods, and show how using SHAP values results in significantly improved supervised clustering performance. Feature importance values are a key part of understanding widely used models such as gradient boosting trees and random forests, so improvements to them have broad practical implications.
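For readers who want to try the basic workflow, the sketch below shows how tree-based SHAP values are typically computed for an XGBoost model via the shap package's TreeExplainer; the toy data, hyperparameters, and the mean-absolute-value summary are illustrative assumptions rather than the paper's experiments.

import numpy as np
import shap
import xgboost as xgb

# Toy binary classification data (illustrative only).
X = np.random.rand(200, 5)
y = (X[:, 0] + X[:, 1] > 1).astype(int)

model = xgb.XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles efficiently.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one row of per-feature attributions per sample

# A common global importance summary: mean absolute SHAP value per feature.
global_importance = np.abs(shap_values).mean(axis=0)
print(global_importance)

Unlike split-count or gain-based importances, this summary is consistent: making the model rely more heavily on a feature cannot decrease that feature's attributed importance.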