Auditing Black-Box Models Using Transparent Model Distillation With Side Information
Sarah Tan, Rich Caruana, Giles Hooker, Yin Lou
Black-box risk scoring models permeate our lives, yet are typically proprietary or opaque. We propose a transparent model distillation approach to audit such models. Model distillation was first introduced to transfer knowledge from a large, complex teacher model to a faster, simpler student model without significant loss in prediction accuracy. To this we add a third criterion: transparency. To gain insight into black-box models, we treat them as teachers, training transparent student models to mimic the risk scores assigned by the teacher. Moreover, we use side information in the form of the actual outcomes the teacher scoring model was intended to predict in the first place. By training a second transparent model on these outcomes, we can compare the two student models. When comparing models trained on risk scores to models trained on outcomes, we show that it is necessary to calibrate the risk-scoring model's predictions to remove distortion that may have been added to the black-box risk-scoring model during or after its training process. We also show how to compute confidence intervals for the particular class of transparent student models we use, tree-based additive models with pairwise interactions (GA2Ms), to support comparison of the two transparent models. We demonstrate the methods on four public datasets: COMPAS, Lending Club, Stop-and-Frisk, and Chicago Police.
Feb-24-2018
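The audit described in the abstract can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the synthetic data, the distorted "teacher" risk score, and the use of scikit-learn gradient-boosted trees as a stand-in for GA2M student models are all assumptions made for the example; the paper's actual students are tree-based additive models with pairwise interactions, and its comparison uses calibrated scores and confidence intervals rather than raw feature importances.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))

# Hypothetical ground truth: the outcome depends on features 0 and 1.
logit = X[:, 0] + 0.5 * X[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Hypothetical black-box "teacher" risk score, deliberately distorted:
# it over-weights feature 0 and leans on feature 2, which is irrelevant
# to the true outcome.
risk_score = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] + 0.8 * X[:, 2])))

# Student 1: a transparent-model stand-in trained to mimic the teacher's scores.
mimic = GradientBoostingRegressor(random_state=0).fit(X, risk_score)

# Student 2: the same model class trained on the actual outcomes
# (the "side information" of the paper).
outcome = GradientBoostingClassifier(random_state=0).fit(X, y)

# Comparing the two students flags features the scorer weights
# differently than the outcomes warrant (here, feature 2).
gap = mimic.feature_importances_ - outcome.feature_importances_
```

In this toy setup, a large positive entry of `gap` marks a feature the risk scorer relies on more heavily than the outcome model does, which is the kind of discrepancy the paper's two-model comparison is designed to surface.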