Rothblum, Guy
Models That Prove Their Own Correctness
Amit, Noga; Goldwasser, Shafi; Paradise, Orr; Rothblum, Guy
How can we trust the correctness of a learned model on a particular input of interest? Model accuracy is typically measured *on average* over a distribution of inputs, giving no guarantee for any fixed input. This paper proposes a theoretically founded solution to this problem: to train *Self-Proving models* that prove the correctness of their output to a verification algorithm $V$ via an Interactive Proof. Self-Proving models ensure that, with high probability over a random input, the model generates a correct output *and* successfully proves its correctness to $V\!$. The *soundness* property of $V$ guarantees that, for *every* input, no model can convince $V$ of the correctness of an incorrect output. Thus, a Self-Proving model proves correctness of most of its outputs, while *all* incorrect outputs (of any model) are detected by $V$. We devise a generic method for learning Self-Proving models, and we prove convergence bounds under certain assumptions. The theoretical framework and results are complemented by experiments on an arithmetic capability: computing the greatest common divisor (GCD) of two integers. Our learning method is used to train a Self-Proving transformer that computes the GCD *and* proves the correctness of its answer.
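For the GCD capability specifically, correctness admits a short classical certificate via Bézout's identity: a positive integer $d$ equals $\gcd(a, b)$ if and only if $d$ divides both $a$ and $b$ and can be written as an integer combination $u \cdot a + v \cdot b = d$. The sketch below is a minimal illustration of this idea in Python; the function names are ours and it is not claimed to be the paper's exact proof system, only an example of a verifier that no prover can convince of a wrong GCD.

```python
from math import gcd

def prove_gcd(a: int, b: int) -> tuple[int, int, int]:
    """Honest prover: the extended Euclidean algorithm returns d = gcd(a, b)
    together with Bezout coefficients u, v satisfying u*a + v*b == d."""
    old_r, r = a, b
    old_u, u = 1, 0
    old_v, v = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_u, u = u, old_u - q * u
        old_v, v = v, old_v - q * v
    return old_r, old_u, old_v

def verify_gcd(a: int, b: int, d: int, u: int, v: int) -> bool:
    """Verifier V: accept iff d > 0 divides both inputs and is an integer
    combination of them. Any such d must equal gcd(a, b), so no choice of
    (d, u, v) can certify an incorrect answer."""
    return d > 0 and a % d == 0 and b % d == 0 and u * a + v * b == d

# Completeness: the honest prover always convinces the verifier.
a, b = 252, 198
d, u, v = prove_gcd(a, b)
assert d == gcd(a, b) and verify_gcd(a, b, d, u, v)

# Soundness: a wrong output is rejected regardless of the claimed coefficients.
assert not verify_gcd(a, b, d + 1, u, v)
```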
Samplable Anonymous Aggregation for Private Federated Data Analysis
Talwar, Kunal; Wang, Shan; McMillan, Audra; Jina, Vojta; Feldman, Vitaly; Basile, Bailey; Cahill, Aine; Chan, Yi Sheng; Chatzidakis, Mike; Chen, Junye; Chick, Oliver; Chitnis, Mona; Ganta, Suman; Goren, Yusuf; Granqvist, Filip; Guo, Kristine; Jacobs, Frederic; Javidbakht, Omid; Liu, Albert; Low, Richard; Mascenik, Dan; Myers, Steve; Park, David; Park, Wonhee; Parsa, Gianni; Pauly, Tommy; Priebe, Christian; Rishi, Rehan; Rothblum, Guy; Scaria, Michael; Song, Linmao; Song, Congzheng; Tarbe, Karl; Vogt, Sebastian; Winstrom, Luke; Zhou, Shundong
Learning aggregate population trends can allow for better data-driven decisions, and the application of machine learning can improve user experience. Compared to learning from public curated datasets, learning from a larger population offers several benefits. As an example, a next-word prediction model trained on words typed by users (a) can better fit the actual distribution of language used on devices, (b) can adapt faster to shifts in distribution, and (c) can more faithfully represent smaller sub-populations that may not be well-represented in curated datasets. At the same time, training such models may involve sensitive user data.
Fairness Through Computationally-Bounded Awareness
Kim, Michael; Reingold, Omer; Rothblum, Guy
We study the problem of fair classification within the versatile framework of Dwork et al. [ITCS '12], which assumes the existence of a metric that measures similarity between pairs of individuals. Unlike earlier work, we do not assume that the entire metric is known to the learning algorithm; instead, the learner can query this *arbitrary* metric a bounded number of times. We propose a new notion of fairness called *metric multifairness* and show how to achieve this notion in our setting. Metric multifairness is parameterized by a similarity metric $d$ on pairs of individuals to classify and a rich collection $C$ of (possibly overlapping) "comparison sets" over pairs of individuals. At a high level, metric multifairness guarantees that *similar subpopulations are treated similarly*, as long as these subpopulations are identified within the class $C$.
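As a rough illustration of the guarantee (the notation below is ours and heavily simplified, not quoted from the paper), one can read *similar subpopulations are treated similarly* as the requirement that, for every comparison set $S \in C$, the classifier's average treatment gap over pairs drawn from $S$ is bounded by the average metric distance between those pairs, up to a small slack $\tau$:

$$
\mathop{\mathbb{E}}_{(x, x') \sim S}\bigl[\,\lvert f(x) - f(x')\rvert\,\bigr] \;\le\; \mathop{\mathbb{E}}_{(x, x') \sim S}\bigl[\,d(x, x')\,\bigr] \;+\; \tau \qquad \text{for all } S \in C.
$$

Here $f$ denotes the learned classifier; the slack $\tau$ and the exact form of the averaging are simplifications for illustration only.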