Collaborating Authors: Coston, Amanda


Neural Topic Models with Survival Supervision: Jointly Predicting Time-to-Event Outcomes and Learning How Clinical Features Relate

arXiv.org Machine Learning

In time-to-event prediction problems, a standard approach to estimating an interpretable model is to use Cox proportional hazards, where features are selected based on lasso regularization or stepwise regression. However, these Cox-based models do not learn how different features relate. As an alternative, we present an interpretable neural network approach to jointly learn a survival model to predict time-to-event outcomes while simultaneously learning how features relate in terms of a topic model. In particular, we model each subject as a distribution over "topics", which are learned from clinical features so as to help predict a time-to-event outcome. From a technical standpoint, we extend existing neural topic modeling approaches to also minimize a survival analysis loss function. We study the effectiveness of this approach on seven healthcare datasets, predicting time until death as well as hospital ICU length of stay, where we find that neural survival-supervised topic models achieve competitive accuracy with existing approaches while yielding interpretable clinical "topics" that explain feature relationships.
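To make the joint objective concrete, here is a minimal sketch, not the authors' implementation: a feed-forward encoder maps clinical features to topic proportions, which feed both a reconstruction term and a Cox partial-likelihood survival loss. The names (TopicEncoder, cox_ph_loss, joint_loss, lambda_surv) and the mean-squared-error reconstruction standing in for a full neural topic-model objective are illustrative assumptions.

# Minimal sketch (assumed names and a simplified topic objective, not the
# authors' code): topic proportions drive both reconstruction and a Cox log-hazard.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicEncoder(nn.Module):
    def __init__(self, n_features, n_topics, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_topics))
        # topic-to-feature matrix used to reconstruct the input
        self.decoder = nn.Linear(n_topics, n_features, bias=False)
        # one Cox regression coefficient per topic
        self.beta = nn.Parameter(torch.zeros(n_topics))

    def forward(self, x):
        theta = F.softmax(self.net(x), dim=-1)   # per-subject topic distribution
        recon = self.decoder(theta)              # reconstructed clinical features
        risk = theta @ self.beta                 # log-hazard score from topics
        return theta, recon, risk

def cox_ph_loss(risk, time, event):
    # Negative Cox partial likelihood; event is a 0/1 float tensor (1 = observed).
    order = torch.argsort(time, descending=True)   # sort so risk sets are prefixes
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)   # log of risk-set sums
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1.0)

def joint_loss(x, time, event, model, lambda_surv=1.0):
    # Topic reconstruction term (MSE stand-in for a topic-model objective)
    # plus the survival loss, reflecting the joint training described above.
    theta, recon, risk = model(x)
    return F.mse_loss(recon, x) + lambda_surv * cox_ph_loss(risk, time, event)

In the paper's setting the reconstruction term would come from a neural topic model's own training objective; lambda_surv is a weight introduced here only to balance the two losses in the sketch.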


Counterfactual Predictions under Runtime Confounding

arXiv.org Machine Learning

Algorithmic tools are increasingly prevalent in domains such as health care, education, lending, criminal justice, and child welfare [2, 7, 12, 15, 30]. In many cases, the tools are not intended to replace human decision-making, but rather to distill rich case information into a simpler form, such as a risk score, to inform human decision-makers [1, 9]. The type of information that these tools need to convey is often counterfactual in nature. Decision-makers need to know what is likely to happen if they choose to take a particular action. For instance, an undergraduate program advisor determining which students to recommend for a personalized case management program might wish to know the likelihood that a given student will graduate if enrolled in the program. In criminal justice, a parole board determining whether to release an offender may wish to know the likelihood that the offender will succeed on parole under different possible levels of supervision intensity. A common challenge to developing valid counterfactual prediction models is that all the data available for training and evaluation is observational: the data reflects historical decisions and outcomes under those decisions rather than randomized trials intended to assess outcomes under different policies. If the data is confounded (that is, if there are factors not captured in the data that influenced both the outcome of interest and historical decisions), valid counterfactual prediction may not be possible.
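As a hedged formalization of the prediction target described above (the notation is introduced here, not quoted from the paper): the tool should estimate the counterfactual regression

\nu_a(v) \;=\; \mathbb{E}\bigl[\,Y(a) \mid V = v\,\bigr],

where $Y(a)$ is the outcome that would occur under action $a$ and $V$ are the features available to the tool at prediction time. If historical decisions $A$ were unconfounded given some richer set of covariates $Z$ recorded in the training data (with $V \subseteq Z$), i.e. $Y(a) \perp A \mid Z$, then

\nu_a(v) \;=\; \mathbb{E}\bigl[\,\mathbb{E}[\,Y \mid A = a,\, Z\,] \;\big|\; V = v\,\bigr],

which can be estimated from observational data. When no such $Z$ is recorded, this identification fails, which is the sense in which valid counterfactual prediction may not be possible.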


Conditional Learning of Fair Representations

arXiv.org Artificial Intelligence

We propose a novel algorithm for learning fair representations that can simultaneously mitigate two notions of disparity among different demographic subgroups. Two key components underpinning the design of our algorithm are balanced error rate and conditional alignment of representations. In settings that have historically had discrimination, we are interested in defining fairness with respect to a protected group, the group which has historically been disadvantaged. Among many recent attempts to achieve algorithmic fairness (Dwork et al., 2012; Hardt et al., 2016; Zemel et al., 2013; Zafar et al., 2015), learning fair representations has attracted increasing attention. However, a tension between such fairness constraints and accuracy has long been empirically observed (Calders et al., 2009) and recently been proved (Zhao & Gordon, 2019). In this work, we address this tension by proposing an algorithm to align the conditional distributions (on the target variable) of representations across different demographic subgroups.
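For concreteness, the two components named above admit standard formulations; the notation below is an illustrative sketch rather than a quotation from the paper. Writing $Z = g(X)$ for the learned representation, $A \in \{0, 1\}$ for the protected group and $Y$ for the target variable, conditional alignment of representations asks that

Z \mid Y = y,\, A = 0 \;\overset{d}{=}\; Z \mid Y = y,\, A = 1 \qquad \text{for each value } y,

i.e. the distribution of representations matches across subgroups once we condition on the target, while the balanced error rate of a classifier $\hat{Y}$ is

\mathrm{BER}(\hat{Y}) \;=\; \tfrac{1}{2}\Bigl(\Pr(\hat{Y} = 1 \mid Y = 0) + \Pr(\hat{Y} = 0 \mid Y = 1)\Bigr),

which weights errors on each class equally regardless of class imbalance.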


Counterfactual Risk Assessments, Evaluation, and Fairness

arXiv.org Machine Learning

Algorithmic risk assessments are increasingly used to help humans make decisions in high-stakes settings, such as medicine, criminal justice and education. In each of these cases, the purpose of the risk assessment tool is to inform actions, such as medical treatments or release conditions, often with the aim of reducing the likelihood of an adverse event such as hospital readmission or recidivism. Problematically, most tools are trained and evaluated on historical data in which the outcomes observed depend on the historical decision-making policy. These tools thus reflect risk under the historical policy, rather than under the different decision options that the tool is intended to inform. Even when tools are constructed to predict risk under a specific decision, they are often improperly evaluated as predictors of the target outcome. Focusing on the evaluation task, in this paper we define counterfactual analogues of common predictive performance and algorithmic fairness metrics that we argue are better suited for the decision-making context. We introduce a new method for estimating the proposed metrics using doubly robust estimation. We provide theoretical results that show that only under strong conditions can fairness according to the standard metric and the counterfactual metric simultaneously hold. Consequently, fairness-promoting methods that target parity in a standard fairness metric may (and, as we show empirically, do) induce greater imbalance in the counterfactual analogue. We provide empirical comparisons on both synthetic data and a real-world child welfare dataset to demonstrate how the proposed method improves upon standard practice.
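As one illustrative instance of the doubly robust machinery mentioned above (the generic augmented inverse-probability-weighting form, not necessarily the paper's exact estimator for its counterfactual metrics), the mean outcome under decision $a$ can be estimated as

\hat{\psi}_a \;=\; \frac{1}{n}\sum_{i=1}^{n}\Bigl[\hat{\mu}_a(X_i) \;+\; \frac{\mathbf{1}\{A_i = a\}}{\hat{\pi}_a(X_i)}\bigl(Y_i - \hat{\mu}_a(X_i)\bigr)\Bigr],

where $\hat{\mu}_a(x)$ estimates $\mathbb{E}[Y \mid A = a, X = x]$ and $\hat{\pi}_a(x)$ estimates $\Pr(A = a \mid X = x)$; the estimator remains consistent if either nuisance model is correctly specified, which is what makes doubly robust estimation attractive for evaluating counterfactual performance and fairness metrics on observational data.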


Proceedings of NeurIPS 2018 Workshop on Machine Learning for the Developing World: Achieving Sustainable Impact

arXiv.org Machine Learning

This is the Proceedings of the NeurIPS 2018 Workshop on Machine Learning for the Developing World: Achieving Sustainable Impact, held in Montreal, Canada, on December 8, 2018.