
Collaborating Authors: Near, Joseph P.


Differentially Private Learning Needs Better Model Initialization and Self-Distillation

arXiv.org Artificial Intelligence

Differentially private SGD (DPSGD) enables privacy-preserving training of language models, but often reduces utility, diversity, and linguistic quality. We introduce DPRefine, a three-phase method that initializes a model using data synthesis from a small pre-trained LM with rigorous filtering, applies DP fine-tuning on private data, and performs self-distillation to refine outputs. This approach significantly outperforms vanilla DPSGD, with AlpacaEval preferring DPRefine's generations in 78.4% of cases across all datasets. Our analysis reveals that DPRefine reduces linguistic errors in generated text by 84.0%, mitigating grammar …

… DPSGD to fine-tune these models on private data often yields poor results, particularly when the private dataset is small (Tramèr et al., 2022; Mireshghallah et al., 2021). Recent work has shown that leveraging better hand-crafted features (Tramer and Boneh, 2020) or features from large pre-trained language models (Li et al., 2022, 2021) can improve the privacy-utility tradeoff in differentially private learning. However, these approaches have limitations: smaller pre-trained models offer limited benefits, and fine-tuning larger models on private data may be infeasible due to proprietary concerns or infrastructure limitations. This raises a critical question: Can we develop small, domain-specific language models that achieve high performance without requiring large private datasets or large, pre-trained models?
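The DP fine-tuning phase rests on the standard DP-SGD recipe: clip each example's gradient to a fixed norm, then add Gaussian noise calibrated to that norm before the optimizer step. The sketch below illustrates that update in PyTorch under stated assumptions; it is not DPRefine's implementation, the batch format and the clip_norm and noise_multiplier values are illustrative, and in practice a library such as Opacus would handle the per-example accounting.

    import torch

    def dp_sgd_step(model, loss_fn, batch, optimizer,
                    clip_norm=1.0, noise_multiplier=1.0):
        """Illustrative DP-SGD update: per-example gradient clipping plus Gaussian noise."""
        params = [p for p in model.parameters() if p.requires_grad]
        summed = [torch.zeros_like(p) for p in params]

        inputs, targets = batch  # assumed: tensors of per-example inputs and labels
        for x, y in zip(inputs, targets):
            loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
            grads = torch.autograd.grad(loss, params)
            # Clip this example's gradient to L2 norm at most clip_norm.
            total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
            scale = torch.clamp(clip_norm / (total_norm + 1e-6), max=1.0)
            for s, g in zip(summed, grads):
                s.add_(g * scale)

        # Add noise proportional to the clipping norm, then average and step.
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p.grad = (s + noise) / len(inputs)
        optimizer.step()
        optimizer.zero_grad()

The ratio of noise scale to clipping norm, together with the number of update steps, is what determines the overall (epsilon, delta) guarantee reported for the fine-tuned model.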


Prediction Sensitivity: Continual Audit of Counterfactual Fairness in Deployed Classifiers

arXiv.org Artificial Intelligence

As AI-based systems increasingly impact many areas of our lives, auditing these systems for fairness is an increasingly high-stakes problem. Traditional group fairness metrics can miss discrimination against individuals and are difficult to apply after deployment. Counterfactual fairness describes an individualized notion of fairness but is even more challenging to evaluate after deployment. We present prediction sensitivity, an approach for continual audit of counterfactual fairness in deployed classifiers. Prediction sensitivity helps answer the question: would this prediction have been different, if this individual had belonged to a different demographic group -- for every prediction made by the deployed model. Prediction sensitivity can leverage correlations between protected status and other features and does not require protected status information at prediction time. Our empirical results demonstrate that prediction sensitivity is effective for detecting violations of counterfactual fairness.
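As an assumed sketch of how this could work (not the paper's actual estimator): fit an auxiliary model on historical data to predict protected status from the other features, then measure how the classifier's output moves along the feature direction that auxiliary model is most sensitive to. In the code below, status_model, the gradient-based direction, and the dot-product score are all illustrative assumptions.

    import torch

    def audit_prediction_sensitivity(model, status_model, x):
        """Hypothetical sketch: score how much model(x) would move along the
        feature direction most associated with protected status, where
        status_model was fit on historical data, so no protected attribute
        is needed at prediction time."""
        # Direction in feature space correlated with protected status.
        x_s = x.clone().detach().requires_grad_(True)
        status_dir, = torch.autograd.grad(status_model(x_s).sum(), x_s)
        status_dir = status_dir / (status_dir.norm() + 1e-8)

        # Directional derivative of the classifier's output along that direction.
        x_p = x.clone().detach().requires_grad_(True)
        pred_grad, = torch.autograd.grad(model(x_p).sum(), x_p)
        return (pred_grad * status_dir).sum().abs()

The auxiliary model is what lets the audit exploit correlations between protected status and other features without ever seeing protected status at prediction time.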


Towards Auditability for Fairness in Deep Learning

arXiv.org Artificial Intelligence

Group fairness metrics can detect when a deep learning model behaves differently for advantaged and disadvantaged groups, but even models that score well on these metrics can make blatantly unfair predictions. We present smooth prediction sensitivity, an efficiently computed measure of individual fairness for deep learning models that is inspired by ideas from interpretability in deep learning. Smooth prediction sensitivity allows individual predictions to be audited for fairness. We present preliminary experimental results suggesting that smooth prediction sensitivity can help distinguish between fair and unfair predictions, and that it may be helpful in detecting blatantly unfair predictions from "group-fair" models.
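The interpretability idea alluded to here resembles SmoothGrad-style averaging: evaluate the sensitivity on several noise-perturbed copies of the input and average, which damps the noisy gradients of deep networks. The sketch below is an assumed construction, not the paper's definition; protected_idx, n_samples, and sigma are illustrative parameters.

    import torch

    def smooth_prediction_sensitivity(model, x, protected_idx,
                                      n_samples=32, sigma=0.1):
        """Assumed sketch: average the prediction's gradient with respect to the
        protected feature over Gaussian-perturbed copies of the input."""
        total = 0.0
        for _ in range(n_samples):
            noisy = (x + sigma * torch.randn_like(x)).detach().requires_grad_(True)
            grad, = torch.autograd.grad(model(noisy).sum(), noisy)
            total += grad[..., protected_idx].abs().sum().item()
        return total / n_samples

Averaging over perturbed inputs trades a few extra forward and backward passes for a more stable sensitivity estimate.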


Towards a Measure of Individual Fairness for Deep Learning

arXiv.org Artificial Intelligence

Deep learning has produced big advances in artificial intelligence, but trained neural networks often reflect and amplify bias in their training data, and thus produce unfair predictions. We propose a novel measure of individual fairness, called prediction sensitivity, that approximates the extent to which a particular prediction is dependent on a protected attribute. We show how to compute prediction sensitivity using standard automatic differentiation capabilities present in modern deep learning frameworks, and present preliminary empirical results suggesting that prediction sensitivity may be effective for measuring bias in individual predictions.
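Since the measure is described as a gradient computed with standard automatic differentiation, a minimal sketch in PyTorch looks like the following; the assumption here is that the protected attribute appears as an input feature at an illustrative index protected_idx.

    import torch

    def prediction_sensitivity(model, x, protected_idx):
        """Minimal sketch: magnitude of the gradient of the prediction with
        respect to the protected input feature."""
        x = x.clone().detach().requires_grad_(True)
        grad, = torch.autograd.grad(model(x).sum(), x)
        return grad[..., protected_idx].abs()

A single backward pass yields the gradient, so a score like this can be computed alongside each prediction at little extra cost.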