Incomplete Contracting and AI Alignment

arXiv.org Artificial Intelligence

We suggest that the analysis of incomplete contracting developed by law and economics researchers can provide a useful framework for understanding the AI alignment problem and help to generate a systematic approach to finding solutions. We first provide an overview of the incomplete contracting literature and explore parallels between this work and the problem of AI alignment. As we emphasize, misalignment between principal and agent is a core focus of economic analysis. We highlight some technical results from the economics literature on incomplete contracts that may provide insights for AI alignment researchers. Our core contribution, however, is to bring to bear an insight that economists have been urged to absorb from legal scholars and other behavioral scientists: the fact that human contracting is supported by substantial amounts of external structure, such as generally available institutions (culture, law) that can supply implied terms to fill the gaps in incomplete contracts. We propose a research agenda for AI alignment work that focuses on the problem of how to build AI that can replicate the human cognitive processes that connect individual incomplete contracts with this supporting external structure.


Fairness, Welfare, and Equity in Personalized Pricing

arXiv.org Machine Learning

We study the interplay of fairness, welfare, and equity considerations in personalized pricing based on customer features. Sellers are increasingly able to conduct price personalization based on predictive modeling of demand conditional on covariates: setting customized interest rates, targeted discounts of consumer goods, and personalized subsidies of scarce resources with positive externalities like vaccines and bed nets. These different application areas may lead to different concerns around fairness, welfare, and equity on different objectives: price burdens on consumers, price envy, firm revenue, access to a good, equal access, and distributional consequences when the good in question further impacts downstream outcomes of interest. We conduct a comprehensive literature review in order to disentangle these different normative considerations and propose a taxonomy of different objectives with mathematical definitions. We focus on observational metrics that do not assume access to an underlying valuation distribution which is either unobserved due to binary feedback or ill-defined due to overriding …

Studying the case of personalized pricing is conceptually challenging because prices are a shared tool in drastically different domains: we consider lending/insurance, consumer goods, and public provision. A crucial distinction is between value-based pricing that offers different prices to customers based on their estimated willingness to pay, and risk-based pricing which offers different prices to customers based on their estimated costs, as in lending and insurance [34]. While discrimination law is strongest in insurance and lending, in lending, discrimination concerns often arise from individual agents providing offers from an actuarially-fair securitized rate sheet [9]. In particular, distributional concerns regarding price optimization reflect overall concern for differentially adept/prepared/educated negotiating customers in insurance and lending, but slight optimism in value-based pricing since low-income individuals may be more price-sensitive [9]. Hence, the majority of our analysis will focus on value-based pricing, which lends itself more readily to price optimization.
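
The taxonomy above distinguishes objectives such as price burden, access, and firm revenue under value-based pricing. As a rough illustration of what such observational metrics can look like in code, the following sketch charges each customer a fraction of an estimated willingness to pay (subject to a unit-cost floor) and compares average price and purchase rate across groups. The pricing rule, the metric definitions, and all names here are simplifying assumptions for illustration, not the paper's formal definitions.

```python
import numpy as np

def value_based_prices(wtp: np.ndarray, markup: float = 0.9, unit_cost: float = 8.0) -> np.ndarray:
    """Toy value-based pricing: charge a fraction of each customer's estimated
    willingness to pay, but never below the seller's unit cost
    (markup, cost, and the WTP model are illustrative assumptions)."""
    return np.maximum(markup * wtp, unit_cost)

def group_price_burden(prices: np.ndarray, groups: np.ndarray) -> dict:
    """Observational metric: average offered price per group, one simple
    reading of the 'price burden on consumers' objective."""
    return {int(g): float(prices[groups == g].mean()) for g in np.unique(groups)}

def group_access(prices: np.ndarray, wtp: np.ndarray, groups: np.ndarray) -> dict:
    """Observational metric: fraction of each group that purchases, i.e.
    customers whose willingness to pay meets or exceeds the offered price."""
    purchased = wtp >= prices
    return {int(g): float(purchased[groups == g].mean()) for g in np.unique(groups)}

# Example: two groups with different estimated willingness-to-pay distributions.
rng = np.random.default_rng(0)
groups = np.repeat([0, 1], 500)
wtp = np.where(groups == 0, rng.normal(10, 2, 1000), rng.normal(14, 2, 1000))
prices = value_based_prices(wtp)
print(group_price_burden(prices, groups))  # average price per group
print(group_access(prices, wtp, groups))   # purchase rate per group
```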


Locally Interpretable Predictions of Parkinson's Disease Progression

arXiv.org Machine Learning

In precision medicine, machine learning techniques have been commonly proposed to aid physicians in early screening of chronic diseases. Many of these diseases become more difficult to treat as they progress, so accurate early screening is critical to ensure resources are directed towards the most effective treatment plan [Pagan, 2012]. Since the final treatment decision must ultimately be made by a doctor, these screening procedures should be interpretable, such that a clinician can explain the decision-making process to patients for informed consent. However, the types of models that achieve the highest accuracy on early screening data tend to be extremely complex, meaning that even machine learning experts have difficulty explaining why certain predictions are made, leading many to describe them as "black boxes" [Breiman, 2001]. In this paper, we bridge this gap by providing a novel approach for explaining black-box model predictions that gives high-fidelity explanations with lower model complexity. In particular, we focus on early screening of Parkinson's Disease (PD). PD is a complicated neurodegenerative disorder that affects the central nervous system, and specifically the motor control of individuals [mjf, 2019]. The disorder was estimated to affect 930,000 individuals in the US by 2020, and is more prevalent in the geriatric population, affecting more than 1% of the population over the age of 60 and 5% of the population over age 85 [Findley, 2007, Kowal et al., 2013, Rossi et al., 2018]. These statistics and other recent studies on Parkinson's epidemiology indicate that as the population ages, the prevalence of PD in the US alone is expected to grow to over 1.2 million by 2030, increasing the total economic burden of the disorder to approximately US$26 billion [Kowal et al., 2013, Rossi et al., 2018].
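
The abstract above does not spell out the explanation method itself, so the following is only a generic local-surrogate sketch of the kind of locally interpretable explanation described: perturb the features of a single patient, query the black-box screening model, and fit a proximity-weighted linear model whose coefficients act as local feature attributions. Function names, the perturbation scheme, and the kernel are assumptions, not the paper's algorithm.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(black_box_predict, x, n_samples=2000, scale=0.1, kernel_width=0.75, seed=0):
    """Fit a weighted linear surrogate to a black-box model around one instance x.
    This is a generic LIME-style sketch, not the paper's exact method."""
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise in feature space.
    Z = x + scale * rng.normal(size=(n_samples, x.shape[0]))
    y = black_box_predict(Z)                      # black-box outputs on perturbations
    d = np.linalg.norm(Z - x, axis=1)             # distance to the instance of interest
    w = np.exp(-(d ** 2) / kernel_width ** 2)     # proximity kernel weights
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=w)
    # Coefficients serve as local feature attributions for this patient.
    return surrogate.coef_

# Usage with any model exposing predict_proba, e.g. a random forest screening model:
# attributions = local_surrogate(lambda Z: model.predict_proba(Z)[:, 1], x_patient)
```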


Locally Interpretable One-Class Anomaly Detection for Credit Card Fraud Detection

arXiv.org Artificial Intelligence

For the highly imbalanced credit card fraud detection problem, most existing methods rely either on data augmentation or on conventional machine learning models, while neural network-based anomaly detection approaches are lacking. Furthermore, few studies have employed AI interpretability tools to investigate the feature importance of transaction data, which is crucial for a black-box fraud detection module. Considering these two points together, we propose a novel anomaly detection framework for credit card fraud detection, along with a model-explaining module responsible for prediction explanations. The fraud detection model is composed of two deep neural networks, which are trained in an unsupervised and adversarial manner. Specifically, the generator is an AutoEncoder that aims to reconstruct genuine transaction data, while the discriminator is a fully-connected network for fraud detection. The explanation module has three white-box explainers in charge of interpreting the AutoEncoder, the discriminator, and the whole detection model, respectively. Experimental results show the state-of-the-art performance of our fraud detection model on the benchmark dataset compared with baselines. In addition, prediction analyses by the three explainers are presented, offering a clear perspective on how each feature of an instance of interest contributes to the final model output.
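
The abstract describes the detector concretely: an AutoEncoder generator that learns to reconstruct genuine transactions and a fully-connected discriminator, trained adversarially and without fraud labels. A minimal PyTorch sketch of that setup follows; layer sizes, losses, and the training schedule are assumptions rather than the paper's specification. At test time, a high reconstruction error or a low discriminator score would flag a transaction as anomalous.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Generator: reconstructs genuine transaction feature vectors."""
    def __init__(self, n_features: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                     nn.Linear(16, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(),
                                     nn.Linear(16, n_features))
    def forward(self, x):
        return self.decoder(self.encoder(x))

class Discriminator(nn.Module):
    """Fully-connected network scoring how genuine a transaction looks."""
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                 nn.Linear(16, 1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, opt_g, opt_d, genuine_batch):
    """One adversarial step on genuine transactions only (unsupervised):
    the discriminator separates real data from reconstructions, while the
    generator tries to reconstruct data the discriminator accepts.
    The loss weighting here is an illustrative assumption."""
    bce, mse = nn.BCELoss(), nn.MSELoss()
    real = torch.ones(len(genuine_batch), 1)
    fake = torch.zeros(len(genuine_batch), 1)

    # Discriminator update: real data -> 1, reconstructions -> 0.
    recon = gen(genuine_batch).detach()
    d_loss = bce(disc(genuine_batch), real) + bce(disc(recon), fake)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: reconstruct well and fool the discriminator.
    recon = gen(genuine_batch)
    g_loss = mse(recon, genuine_batch) + bce(disc(recon), real)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```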


GNN Explainer: A Tool for Post-hoc Explanation of Graph Neural Networks

arXiv.org Machine Learning

Graph Neural Networks (GNNs) are a powerful tool for machine learning on graphs. GNNs combine node feature information with the graph structure by using neural networks to pass messages along edges in the graph. However, incorporating both graph structure and feature information leads to complex non-linear models, and explaining predictions made by GNNs remains a challenging task. Here we propose GnnExplainer, a general, model-agnostic approach for providing interpretable explanations for predictions of any GNN-based model on any graph-based machine learning task (node and graph classification, link prediction). To explain a given node's predicted label, GnnExplainer provides a local interpretation by highlighting relevant features as well as an important subgraph structure, identifying the edges that are most relevant to the prediction. Additionally, the model provides single-instance explanations when given a single prediction, as well as multi-instance explanations that aim to explain predictions for an entire class of instances/nodes. We formalize GnnExplainer as an optimization task that maximizes the mutual information between the prediction of the full model and the prediction of the simplified explainer model. We experiment on synthetic as well as real-world data. On synthetic data, we demonstrate that our approach can highlight relevant topological structures in noisy graphs. We also demonstrate that GnnExplainer provides a better understanding of pre-trained models on real-world tasks. GnnExplainer offers a variety of benefits, from identifying semantically relevant structures that explain predictions to providing guidance when debugging faulty graph neural network models.
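
GnnExplainer is formalized as maximizing mutual information between the full model's prediction and the prediction obtained on a masked subgraph. A common practical surrogate is to learn a soft edge mask by gradient descent so that the masked graph preserves the model's own predicted label while staying sparse. The sketch below follows that reading and assumes a GNN callable as gnn(x, edge_index, edge_weight=...); that interface, the sparsity penalty, and the hyperparameters are assumptions rather than the paper's released code.

```python
import torch

def explain_node(gnn, x, edge_index, node_idx, epochs=200, lr=0.01, beta=0.005):
    """Learn a soft edge mask that preserves the model's prediction for one node.
    Minimizing cross-entropy to the model's own predicted label, plus a sparsity
    penalty, is a practical surrogate for the mutual-information objective."""
    with torch.no_grad():
        target = gnn(x, edge_index).argmax(dim=-1)[node_idx]   # label from the full model

    edge_logits = torch.zeros(edge_index.size(1), requires_grad=True)
    optimizer = torch.optim.Adam([edge_logits], lr=lr)

    for _ in range(epochs):
        mask = torch.sigmoid(edge_logits)                       # soft edge mask in (0, 1)
        logits = gnn(x, edge_index, edge_weight=mask)[node_idx]
        loss = torch.nn.functional.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))
        loss = loss + beta * mask.sum()                         # encourage a small subgraph
        optimizer.zero_grad(); loss.backward(); optimizer.step()

    return torch.sigmoid(edge_logits).detach()                  # per-edge importance scores
```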