A.2 Main Proof Sketch

In this section we give a theoretical guarantee for the performance of our algorithm. Essentially, it measures the largest total difference in value estimation among all functions $f \in \mathcal{F}_t$ for the fixed inputs $x_{t,i}$, where $i \in [M]$.

Lemma 2. If $(\beta_t \ge 0 \mid t \in \mathbb{N})$ is a nondecreasing sequence and $\mathcal{F}_t := \dots$

The main structure of this proof is similar to Proposition 3, Section C of the Eluder dimension paper, and we only point out the subtle details that make the difference. In addition to the notation of Section 3, we introduce further symbols for the regret analysis. Next, we show that $f^h$ is a feasible solution to the optimization problem defining $\mathcal{F}_t$.
Regression under demographic parity constraints via unlabeled post-processing
We address the problem of performing regression while ensuring demographic parity, even without access to sensitive attributes during inference. We present a general-purpose post-processing algorithm that, using accurate estimates of the regression function and a sensitive attribute predictor, generates predictions that meet the demographic parity constraint. Our method involves discretization and stochastic minimization of a smooth convex function.
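The post-processing step described above can be sketched as quantile matching to a common target distribution. The following is a minimal sketch under our own assumptions, not the paper's exact algorithm: the function name `dp_postprocess`, the hard assignment from the sensitive-attribute predictor, the two-group setting, and the quantile-grid discretization are all illustrative choices.

```python
import numpy as np

def dp_postprocess(scores, group_probs, grid_size=100):
    """Map each group's prediction distribution onto a common target
    (the 1-D Wasserstein barycenter of the group distributions), so the
    post-processed predictions satisfy demographic parity approximately.

    scores      : (n,) regression predictions f(x)
    group_probs : (n,) predicted probability of belonging to group 1
    """
    group = (group_probs >= 0.5).astype(int)          # hard-assign predicted group
    weights = np.array([np.mean(group == g) for g in (0, 1)])
    levels = np.linspace(0.0, 1.0, grid_size + 2)[1:-1]  # interior quantile grid
    # Per-group quantile functions evaluated on the common grid.
    q = [np.quantile(scores[group == g], levels) for g in (0, 1)]
    barycenter = weights[0] * q[0] + weights[1] * q[1]   # 1-D Wasserstein barycenter
    out = np.empty_like(scores, dtype=float)
    for g in (0, 1):
        mask = group == g
        # Rank of each score within its group -> quantile level -> barycenter value.
        u = np.searchsorted(np.sort(scores[mask]), scores[mask],
                            side="right") / mask.sum()
        out[mask] = np.interp(u, levels, barycenter)
    return out
```

The quantile grid plays the role of the discretization mentioned in the abstract; because every group is mapped onto the same target distribution, the distribution of the output no longer depends on the (predicted) sensitive attribute.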
Explanations that reveal all through the definition of encoding
Feature attributions attempt to highlight which inputs drive predictive power. Good attributions or explanations are thus those whose selected inputs retain this predictive power; accordingly, evaluations of explanations score their quality of prediction. However, for a class of explanations called encoding explanations, evaluations produce scores better than what appears possible from the values in the explanation alone. Probing for encoding remains a challenge because there is no general characterization of what gives the extra predictive power. We develop a definition of encoding that identifies this extra predictive power via conditional dependence, and we show that the definition fits existing examples of encoding. In contrast to encoding explanations, this definition implies that non-encoding explanations contain all the informative inputs used to produce the explanation, giving them a "what you see is what you get" property, which makes them transparent and simple to use.
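To make the conditional-dependence idea concrete, here is a toy construction of our own (not taken from the paper) in which an explanation's extra predictive power comes entirely from which features it highlights rather than from the highlighted values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))      # features carry no signal about y
y = rng.integers(0, 2, size=n)   # labels independent of X

# Non-encoding explanation: always highlight feature 0.
mask_plain = np.zeros((n, 3), dtype=bool)
mask_plain[:, 0] = True

# Encoding explanation: highlight feature 1 when y=0, feature 2 when y=1.
# The mask is conditionally dependent on y given the retained values.
mask_enc = np.zeros((n, 3), dtype=bool)
mask_enc[np.arange(n), 1 + y] = True

def masked_accuracy(mask):
    """Predict y from the masked input alone (zeros where not highlighted)."""
    masked = np.where(mask, X, 0.0)
    pred = (masked[:, 2] != 0).astype(int)  # decoder that reads the leak
    return (pred == y).mean()

print(masked_accuracy(mask_plain))  # ~0.5: retained values are uninformative
print(masked_accuracy(mask_enc))    # ~1.0: the mask itself encodes y
```

The second explanation scores far better than the information in its retained values supports, because the position of the highlight depends on the label even after conditioning on those values; that conditional dependence is exactly what the definition above flags as encoding.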