

How Millie Dresselhaus paid it forward

MIT Technology Review

Encouraged early on by Nobel laureate Enrico Fermi, the "Queen of Carbon" laid the foundation for countless advances in nanotechnology--and mentored countless young scientists along the way. At MIT, Mildred Dresselhaus became a beloved professor who pushed her students to be their very best and provided support in ways big and small. Institute Professor Mildred "Millie" Dresselhaus forever altered our understanding of matter--the physical stuff of the universe that has mass and takes up space. Over 57 years at MIT, Dresselhaus also played a significant role in inspiring people to use this new knowledge to tackle some of the world's greatest challenges, from producing clean energy to curing cancer. Although she became an emerita professor in 2007, Dresselhaus, who taught electrical engineering and physics, remained actively involved in research and all other aspects of MIT life until her death in 2017. She would have been 95 this November.


Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework

Baharlouei, Sina, Razaviyayn, Meisam

arXiv.org Machine Learning

While training fair machine learning models has been studied extensively in recent years, most existing methods assume that the training and test data are drawn from similar distributions. Under distribution shift, models that are fair on the training data may behave unfairly on test data. Several approaches to fair learning robust to distribution shifts have been proposed to address this shortcoming, but most assume access to a causal graph describing how the different features interact. Moreover, existing algorithms require full access to the data and cannot be used with small batches (stochastic/minibatch implementation). This paper proposes the first stochastic distributionally robust fairness framework with convergence guarantees that does not require knowledge of the causal graph. More specifically, we formulate fair inference under distribution shift as a distributionally robust optimization problem over $L_p$-norm uncertainty sets, with Exponential Rényi Mutual Information (ERMI) as the measure of fairness violation. We then discuss how the proposed method can be implemented in a stochastic fashion. We evaluate the presented framework's performance and efficiency through extensive experiments on real datasets exhibiting distribution shifts.
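To make the distributionally robust formulation concrete, the following is a hypothetical toy sketch of the inner "worst-case distribution" step in such an objective: within an $L_2$ ball of radius rho around the uniform distribution, the loss-maximizing reweighting shifts probability mass toward high-loss samples. This is an illustration of generic $L_p$-ball DRO, not the authors' algorithm; the function name, the choice p = 2, and the clipping fix-up are all assumptions made for brevity (a full implementation would project back onto the probability simplex).

```python
import numpy as np

def worst_case_weights(losses, rho):
    """Maximize q @ losses over distributions q with ||q - uniform||_2 <= rho.

    With the sum-to-one constraint, the maximizer moves from uniform in the
    direction of the mean-centered losses (a standard Lagrangian argument).
    """
    n = len(losses)
    centered = losses - losses.mean()
    norm = np.linalg.norm(centered)
    if norm == 0.0:
        return np.full(n, 1.0 / n)   # all losses equal: uniform is worst case
    q = 1.0 / n + rho * centered / norm
    return np.clip(q, 0.0, None)     # crude fix-up if rho is too large

# Worst-case reweighted loss upweights the hardest samples.
losses = np.array([0.2, 0.9, 0.1, 0.5])
q = worst_case_weights(losses, rho=0.1)
robust_loss = q @ losses
print(q, robust_loss, losses.mean())  # robust_loss exceeds the plain average
```

Training against `robust_loss` instead of the plain average is what makes the learned model robust to (bounded) shifts of the data distribution.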


A Stochastic Optimization Framework for Fair Risk Minimization

Lowy, Andrew, Baharlouei, Sina, Pavan, Rakesh, Razaviyayn, Meisam, Beirami, Ahmad

arXiv.org Artificial Intelligence

Despite the success of large-scale empirical risk minimization (ERM) at achieving high accuracy across a variety of machine learning tasks, fair ERM is hindered by the incompatibility of fairness constraints with stochastic optimization. We consider the problem of fair classification with discrete sensitive attributes and potentially large models and data sets, which require stochastic solvers. Existing in-processing fairness algorithms are either impractical in the large-scale setting because they require large batches of data at each iteration, or they are not guaranteed to converge. In this paper, we develop the first stochastic in-processing fairness algorithm with guaranteed convergence. For the demographic parity, equalized odds, and equal opportunity notions of fairness, we provide slight variations of our algorithm, called FERMI, and prove that each of these variations converges in stochastic optimization with any batch size. Empirically, we show that FERMI is amenable to stochastic solvers with multiple (non-binary) sensitive attributes and non-binary targets, performing well even with minibatch sizes as small as one. Extensive experiments show that FERMI achieves the most favorable tradeoffs between fairness violation and test accuracy across all tested setups, compared with state-of-the-art baselines for demographic parity, equalized odds, and equal opportunity. These benefits are especially significant with small batch sizes and for non-binary classification with a large number of sensitive attributes, making FERMI a practical, scalable fairness algorithm. The code for all of the experiments in this paper is available at: https://github.com/optimization-for-data-driven-science/FERMI.
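The general shape of in-processing fairness with a stochastic solver, as described above, can be sketched as follows. This is a minimal toy illustration, not the authors' FERMI implementation (which is at the linked repository): it trains a logistic regression on synthetic data by minibatch gradient descent, with a simple demographic-parity-gap penalty standing in for the ERMI regularizer, and the synthetic data, penalty strength `lam`, and learning rate are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 2 features x, binary label y, binary sensitive attribute s.
n = 400
s = rng.integers(0, 2, size=n)                    # sensitive attribute
x = rng.normal(size=(n, 2)) + s[:, None] * 0.8    # features correlated with s
y = (x[:, 0] + 0.5 * rng.normal(size=n) > 0.4).astype(float)

w, b = np.zeros(2), 0.0
lam, lr = 2.0, 0.1      # fairness penalty strength and step size (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(200):
    idx = rng.choice(n, size=32, replace=False)   # minibatch: stochastic solver
    xb, yb, sb = x[idx], y[idx], s[idx]
    p = sigmoid(xb @ w + b)
    g = (p - yb) / len(idx)                       # logistic-loss gradient
    if sb.min() != sb.max():                      # both groups in the batch
        # Penalize (lam/2) * gap^2, where gap is the difference in mean
        # predicted score between groups: a crude demographic-parity surrogate.
        gap = p[sb == 1].mean() - p[sb == 0].mean()
        sign = np.where(sb == 1, 1.0 / (sb == 1).sum(), -1.0 / (sb == 0).sum())
        g = g + lam * gap * sign * p * (1.0 - p)  # chain rule through sigmoid
    w -= lr * (xb.T @ g)
    b -= lr * g.sum()

p_all = sigmoid(x @ w + b)
dp_gap = abs(p_all[s == 1].mean() - p_all[s == 0].mean())
acc = ((p_all > 0.5) == y).mean()
print(f"accuracy={acc:.2f}, demographic-parity gap={dp_gap:.2f}")
```

The key property the abstract highlights is that the update uses only a minibatch at each step; the paper's contribution is an unbiased stochastic estimator of its fairness measure for which such updates provably converge, which a naive batch-level gap estimate like the one here does not guarantee.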