
Neural Information Processing Systems

We study the problem of learning an optimal regression function subject to a fairness constraint: conditionally on the sensitive feature, the distribution of the function's output must remain the same. This constraint naturally extends the notion of demographic parity, often used in classification, to the regression setting. We tackle this problem by leveraging a discretized proxy version, for which we derive an explicit expression for the optimal fair predictor. This result naturally suggests a two-stage approach, in which we first estimate the (unconstrained) regression function from a set of labeled data and then recalibrate it with another set of unlabeled data.
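The recalibration step described above can be sketched as follows. This is a minimal illustration, assuming the recalibration maps each group's predictions onto a common target distribution obtained by averaging the per-group quantile functions (the function name and the quantile-grid discretization are illustrative choices, not the paper's exact procedure):

```python
import numpy as np

def recalibrate_to_barycenter(preds, groups):
    """Map each group's predictions onto a common distribution.

    The common target is approximated by averaging per-group quantile
    functions, weighted by group frequency (an illustrative sketch).
    """
    preds = np.asarray(preds, dtype=float)
    groups = np.asarray(groups)
    qs = np.linspace(0, 1, 101)  # quantile grid
    labels, counts = np.unique(groups, return_counts=True)
    weights = counts / counts.sum()
    # Common quantile function: weighted average of group quantiles.
    bary_q = sum(w * np.quantile(preds[groups == g], qs)
                 for g, w in zip(labels, weights))
    out = np.empty_like(preds)
    for g in labels:
        mask = groups == g
        # Rank of each prediction within its group -> target quantile.
        ranks = np.searchsorted(np.sort(preds[mask]), preds[mask],
                                side="right")
        out[mask] = np.interp(ranks / mask.sum(), qs, bary_q)
    return out
```

After this transformation, each group's predictions follow (approximately) the same distribution, which is the demographic-parity requirement in the regression setting.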


Quantifying and Improving Transferability in Domain Generalization

Neural Information Processing Systems

Based on invariant features, a high-performing classifier on source domains could hopefully behave equally well on a target domain. In other words, we hope the invariant features are transferable. However, in practice, there are no perfectly transferable features, and some algorithms seem to learn "more transferable" features than others.


estimated by the normalized sum $\sum_{i=1}^{n} w_i\, g(X_i) \,/\, \sum_{i=1}^{n} w_i$, where $w_i = f(X_i)/q_{i-1}(X_i)$ are

Neural Information Processing Systems

A key object in sequential simulation is the sequence of distributions, called the policy, from which to generate the random variables, called particles, used to approximate the integrals of interest.
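The normalized sum quoted above is the standard self-normalized importance sampling estimator. A minimal sketch, where the target density `f`, proposal (policy) `q`, and test function `g` are illustrative stand-ins:

```python
import numpy as np

def snis_estimate(g, f, q_pdf, q_sample, n, rng):
    """Self-normalized importance sampling estimate of E_f[g(X)].

    Draw particles X_i ~ q, weight them by w_i = f(X_i)/q(X_i),
    and return sum(w_i * g(X_i)) / sum(w_i).
    """
    x = q_sample(rng, n)   # particles drawn from the policy q
    w = f(x) / q_pdf(x)    # unnormalized importance weights
    return np.sum(w * g(x)) / np.sum(w)
```

For example, the mean of a standard normal can be estimated from particles drawn from a wider, shifted normal proposal; the weight normalization corrects for the mismatch between policy and target.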


On ranking via sorting by estimated expected utility

Neural Information Processing Systems

Since utilities can serve as target values to learn the scoring function through square-loss regression, the optimality of sorting by expected utilities is equivalent to the consistency of regression.
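The reduction described above can be sketched in a few lines: fit a square-loss regressor of the utilities, then rank items by descending predicted score. The linear least-squares model here is only an illustrative choice of regressor, not the paper's setting:

```python
import numpy as np

def rank_by_regressed_utility(X_train, u_train, X_test):
    """Learn scores via square-loss (least-squares) regression of the
    utilities, then rank test items by descending predicted utility."""
    # Closed-form least squares with a bias column.
    A = np.column_stack([X_train, np.ones(len(X_train))])
    coef, *_ = np.linalg.lstsq(A, u_train, rcond=None)
    scores = np.column_stack([X_test, np.ones(len(X_test))]) @ coef
    return np.argsort(-scores)  # item indices, best first
```

If the regressor is consistent (its predictions converge to the true expected utilities), sorting by its scores recovers the optimal ranking, which is the equivalence the abstract refers to.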



Checklist

Neural Information Processing Systems

1. For all authors:
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
- Did you discuss any potential negative societal impacts of your work?
- Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See Appendix.
- Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
- Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
- Did you include the total amount of compute and the type of resources used?
- Did you include any new assets either in the supplemental material or as a URL? [Yes]
- Did you discuss whether and how consent was obtained from people whose data you're using/curating?
- Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
- If you used crowdsourcing or conducted research with human subjects... (a)

Proposition 4. For Γ ⊆ H and domains S, T we have: … This is a mild assumption that can hold in practice. We prove the first inequality as an example; the others follow similarly.