



Neural Information Processing Systems

A.2 Main Proof Sketch

In this section we give a theoretical guarantee for the performance of our algorithm. Essentially, the quantity of interest measures the largest total difference in value estimates among all functions f ∈ F_t at the fixed inputs x_{t,i}, where i ∈ [M].

Lemma 2. If (β_t ≥ 0 | t ∈ ℕ) is a nondecreasing sequence and F_t := { … }.

The main structure of this proof is similar to that of Proposition 3, Section C in the Eluder dimension paper; we only point out the subtle details that make the difference. Beyond the notation of Section 3, we introduce additional symbols for the regret analysis. Next, we show that f^h is a feasible solution to the optimization problem defining F_t.
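The "largest total difference of value estimation" over a function class can be made concrete with a small sketch. Everything below is illustrative, not from the paper: we assume a finite class of linear candidate functions and compute the pairwise maximum of summed value differences at the fixed inputs.

```python
import numpy as np

# Hypothetical finite function class: each candidate f is a weight vector,
# evaluated linearly on inputs. All names here are illustrative assumptions.
rng = np.random.default_rng(0)
candidates = rng.normal(size=(50, 3))   # 50 candidate parameter vectors
inputs = rng.normal(size=(4, 3))        # the fixed inputs x_{t,i}, i in [M], M = 4

values = candidates @ inputs.T          # shape (50, 4): f(x_{t,i}) per candidate

# Largest total difference of value estimates across the class:
# max over pairs (f, f') of sum_i |f(x_{t,i}) - f'(x_{t,i})|
diffs = np.abs(values[:, None, :] - values[None, :, :]).sum(axis=-1)
width = diffs.max()
print(width)
```

In confidence-set analyses this width shrinks as F_t is cut down by the constraint parametrized by β_t, which is what drives the regret bound.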



Regression under demographic parity constraints via unlabeled post-processing

Neural Information Processing Systems

We address the problem of performing regression while ensuring demographic parity, even without access to sensitive attributes during inference. We present a general-purpose post-processing algorithm that, using accurate estimates of the regression function and a sensitive attribute predictor, generates predictions that meet the demographic parity constraint. Our method involves discretization and stochastic minimization of a smooth convex function.
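As a much simpler stand-in for the paper's discretization and stochastic convex minimization, the flavor of demographic-parity post-processing can be illustrated by quantile matching on synthetic data: each group's predictions are pushed through the pooled quantile function so both groups share one output distribution. This is an illustrative sketch, not the paper's algorithm.

```python
import numpy as np

# Synthetic data: binary sensitive attribute and raw predictions whose
# distribution is shifted by group (a demographic-parity violation).
rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, size=n)              # sensitive attribute
preds = rng.normal(loc=group * 1.0, size=n)     # group-shifted predictions

# Map each group's predictions through its empirical CDF, then through the
# pooled quantile function, so both groups share one output distribution.
pooled_sorted = np.sort(preds)
fair = np.empty_like(preds)
for g in (0, 1):
    mask = group == g
    ranks = np.argsort(np.argsort(preds[mask]))   # 0 .. n_g - 1
    u = (ranks + 0.5) / mask.sum()                # empirical CDF values
    fair[mask] = np.quantile(pooled_sorted, u)    # pooled quantiles

# Demographic-parity check: group means of fair predictions nearly coincide.
gap_before = abs(preds[group == 0].mean() - preds[group == 1].mean())
gap_after = abs(fair[group == 0].mean() - fair[group == 1].mean())
print(gap_before, gap_after)
```

The monotone per-group mapping preserves within-group ranking while closing the between-group distribution gap, which is the core trade-off such post-processing methods manage.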





Explanations that reveal all through the definition of encoding

Neural Information Processing Systems

Feature attributions attempt to highlight which inputs drive predictive power. Good attributions or explanations are thus those that produce inputs retaining this predictive power; accordingly, evaluations of explanations score their predictive quality. However, for a class of explanations called encoding explanations, evaluations produce scores better than what appears possible from the values in the explanation alone. Probing for encoding remains a challenge because there is no general characterization of where the extra predictive power comes from. We develop a definition of encoding that identifies this extra predictive power via conditional dependence and show that the definition fits existing examples of encoding. This definition implies that, in contrast to encoding explanations, non-encoding explanations contain all the informative inputs used to produce the explanation, giving them a "what you see is what you get" property that makes them transparent and simple to use.