 affirmative action


DEI Died This Year. Maybe It Was Supposed To

WIRED

My position feels more precarious than ever. How much longer can I last? It's a question that I sometimes toss out in the company of friends who--like me, and maybe like you--have a complicated relationship to their job. I've worked at WIRED as a writer for eight years, with much success. Eight years is also an eternity in news media, especially if you are Black. All industries suffer from unique growing pains. Ours just so happens to have laughably high turnover rates, a distaste for racial and gender diversity, and the dubious distinction of being perpetually on the verge of extinction. So on nights when friends and I gather, trading war stories of workplace microaggressions and corporate mismanagement under damp bar lighting, we wonder how we've lasted as long as we have. The only reason I've survived, I joke, is because I'm Black. It's a silly thing to say, particularly because I have no actual proof of it other than the occasional feeling. What I do know is that I've been The Only One in more spaces than I care to remember, and rarely by choice.


How the Supreme Court Defines Liberty

The New Yorker

Recent memoirs by the Justices reveal how a new vision of restraint has led to radical outcomes. To understand how grudging Amy Coney Barrett's new book is when it comes to revealing personal details, consider that one of the family members the Supreme Court Justice most often refers to is a great-grandmother who died five years before she was born. On Barrett's desk at home, she recounts in "Listening to the Law," she keeps a photograph of her great-grandmother's one-story house, where, as a widow during the Great Depression, she raised some of her thirteen children and took in other needy relatives. "Looking at the photo reminds me of a woman who stretched herself beyond all reasonable capacity," Barrett explains. "I'm not sure that I'll be able to manage my life with the same grace that she had. But she motivates me to keep trying." For Barrett, the mother of seven children, that effort entails setting her alarm for 5 A.M. "Our kids get up at six thirty during the school year, so I start early if I want to accomplish anything on my own to-do list," she writes. This is what passes for disclosure from Barrett; she measures out the details of her life with coffee spoons, careful not to spill.


Laypeople's Attitudes Towards Fair, Affirmative, and Discriminatory Decision-Making Algorithms

Lima, Gabriel, Grgić-Hlača, Nina, Langer, Markus, Zou, Yixin

arXiv.org Artificial Intelligence

Affirmative algorithms have emerged as a potential answer to algorithmic discrimination, seeking to redress past harms and rectify the source of historical injustices. We present the results of two experiments ($N = 1193$) capturing laypeople's perceptions of affirmative algorithms -- those which explicitly prioritize the historically marginalized -- in hiring and criminal justice. We contrast these opinions about affirmative algorithms with folk attitudes towards algorithms that prioritize the privileged (i.e., discriminatory) and systems that make decisions independently of demographic groups (i.e., fair). We find that people -- regardless of their political leaning and identity -- view fair algorithms favorably and denounce discriminatory systems. In contrast, we identify disagreements concerning affirmative algorithms: liberals and racial minorities rate affirmative systems as positively as their fair counterparts, whereas conservatives and those from the dominant racial group evaluate affirmative algorithms as negatively as discriminatory systems. We identify a source of these divisions: people have varying beliefs about who (if anyone) is marginalized, shaping their views of affirmative algorithms. We discuss the possibility of bridging these disagreements to bring people together towards affirmative algorithms.
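
To make the three categories concrete, here is a minimal sketch (ours, not the authors' experimental materials) of what a fair, an affirmative, and a discriminatory selection rule might look like on synthetic scores; the group labels, bonus values, and selection rate are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy applicant pool: a score and a group label. All numbers are illustrative.
n = 1000
group = rng.integers(0, 2, size=n)        # 0 = dominant group, 1 = marginalized
score = rng.normal(size=n)                # group-independent "merit" score

def decide(score, group, bonus):
    """Select the top 10% after adding `bonus` to the marginalized group's scores."""
    adjusted = score + bonus * (group == 1)
    cutoff = np.quantile(adjusted, 0.9)
    return adjusted >= cutoff

fair = decide(score, group, bonus=0.0)            # decides independently of the group
affirmative = decide(score, group, bonus=+0.5)    # prioritizes the marginalized
discriminatory = decide(score, group, bonus=-0.5) # prioritizes the privileged

for name, selected in [("fair", fair), ("affirmative", affirmative),
                       ("discriminatory", discriminatory)]:
    print(f"{name:15s} selection rate: "
          f"marginalized={selected[group == 1].mean():.2f}, "
          f"dominant={selected[group == 0].mean():.2f}")
```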


A Systems Thinking Approach to Algorithmic Fairness

Lam, Chris

arXiv.org Artificial Intelligence

Systems thinking provides us with a way to model the algorithmic fairness problem by allowing us to encode prior knowledge and assumptions about where we believe bias might exist in the data generating process. We can then model this using a series of causal graphs, enabling us to link AI/ML systems to politics and the law. By treating the fairness problem as a complex system, we can combine techniques from machine learning, causal inference, and system dynamics. Each of these analytical techniques is designed to capture different emergent aspects of fairness, allowing us to develop a deeper and more holistic view of the problem. This can help policymakers on both sides of the political aisle to understand the complex trade-offs that arise from different types of fairness policies, providing a blueprint for designing AI policy that is aligned with their political agendas.
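
As a toy illustration of encoding such assumptions, the sketch below (ours, not the paper's code) writes a hypothetical data-generating process as a small causal graph and reads off which features sit downstream of the sensitive attribute; the variable names and edges are assumptions chosen for illustration.

```python
import networkx as nx

# Hypothetical data-generating process: the sensitive attribute A influences
# education E and zip code Z through historical bias; both influence the label Y.
dgp = nx.DiGraph()
dgp.add_edges_from([
    ("A", "E"),   # unequal access to schooling
    ("A", "Z"),   # residential segregation
    ("E", "Y"),   # qualification affects the outcome
    ("Z", "Y"),   # neighborhood affects the outcome
])

# Features that are causal descendants of A act as proxies for it, so a
# "group-blind" model trained on them can still reproduce the encoded bias.
proxies = nx.descendants(dgp, "A") - {"Y"}
print("Potential proxy features for A:", sorted(proxies))
```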


"Patriarchy Hurts Men Too." Does Your Model Agree? A Discussion on Fairness Assumptions

Favier, Marco, Calders, Toon

arXiv.org Artificial Intelligence

The pipeline of a fair ML practitioner is generally divided into three phases: 1) Selecting a fairness measure. 2) Choosing a model that minimizes this measure. 3) Maximizing the model's performance on the data. In the context of group fairness, this approach often obscures implicit assumptions about how bias is introduced into the data. For instance, in binary classification, it is often assumed that, given equal fairness, the best model is the one with better performance. However, this belief already imposes specific properties on the process that introduced bias. More precisely, we are already assuming that the biasing process is a monotonic function of the fair scores, dependent solely on the sensitive attribute. We formally prove this claim for several implicit fairness assumptions. This leads, in our view, to two possible conclusions: either the behavior of the biasing process is more complex than mere monotonicity, which means we need to identify and reject our implicit assumptions in order to develop models capable of tackling more complex situations; or the bias introduced in the data behaves predictably, implying that many of the developed models are superfluous.
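
The sketch below (ours, not the authors' code) spells out what such an implicit assumption looks like: the observed, biased scores are generated by a group-dependent but monotone transformation of unobserved "fair" scores, so within-group rankings are untouched. The specific transformation and the data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)          # sensitive attribute
fair_score = rng.uniform(0.0, 1.0, size=n)  # unobserved "fair" score

# Group-dependent, strictly increasing transformation of the fair score:
# the disadvantaged group's scores are pushed down (s -> s**2 on [0, 1]).
biased_score = np.where(group == 1, fair_score**2, fair_score)

# Because each group's transformation is monotone, the within-group ranking of
# individuals is preserved, which is what makes simple, threshold-style
# corrections appear sufficient.
for g in (0, 1):
    idx = group == g
    same_order = np.array_equal(np.argsort(fair_score[idx]),
                                np.argsort(biased_score[idx]))
    print(f"group {g}: within-group ranking preserved: {same_order}")
```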


Interventions Against Machine-Assisted Statistical Discrimination

Zhu, John Y.

arXiv.org Artificial Intelligence

This article studies how to intervene against statistical discrimination when it is based on beliefs generated by machine learning rather than by humans. Unlike beliefs formed by a human mind, machine learning-generated beliefs are verifiable. This allows interventions to move beyond simple, belief-free designs like affirmative action to more sophisticated ones that constrain decision makers in ways that depend on what they are thinking. Such mind-reading interventions can perform well where affirmative action does not, even when the beliefs being conditioned on are possibly incorrect and biased.
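
As a stylized sketch of the contrast (ours, not the article's model), the code below compares a belief-free intervention, a quota whose design never inspects the model's predictions, with a belief-contingent intervention that imposes group-specific cutoffs directly on the verifiable machine-learning score. The scores, thresholds, and quota shares are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
group = rng.integers(0, 2, size=n)
# Machine-generated "belief": a verifiable predicted probability of success,
# biased downward for group 1 in this toy setup.
belief = np.clip(rng.beta(2, 2, size=n) - 0.10 * (group == 1), 0.0, 1.0)

def quota_hire(belief, group, k):
    """Belief-free intervention: fill half of the k slots from each group.
    The constraint itself ignores the model's beliefs; within each group the
    decision maker still picks the candidates it believes are best."""
    hired = np.zeros(len(belief), dtype=bool)
    for g in (0, 1):
        idx = np.flatnonzero(group == g)
        hired[idx[np.argsort(-belief[idx])][: k // 2]] = True
    return hired

def threshold_hire(belief, group, t_dominant, t_marginalized):
    """Belief-contingent intervention: group-specific cutoffs imposed directly
    on the verifiable machine-learning score."""
    return np.where(group == 1, belief >= t_marginalized, belief >= t_dominant)

quota = quota_hire(belief, group, k=n // 5)
contingent = threshold_hire(belief, group, t_dominant=0.60, t_marginalized=0.50)
for name, hired in [("belief-free quota", quota), ("belief-contingent", contingent)]:
    print(f"{name:18s} hire rate: dominant={hired[group == 0].mean():.2f}, "
          f"marginalized={hired[group == 1].mean():.2f}")
```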


Understanding Divergent Framing of the Supreme Court Controversies: Social Media vs. News Outlets

Pan, Jinsheng, Wang, Zichen, Qi, Weihong, Lyu, Hanjia, Luo, Jiebo

arXiv.org Artificial Intelligence

Understanding the framing of political issues is of paramount importance, as it significantly shapes how individuals perceive, interpret, and engage with these matters. While prior research has independently explored framing within news media and by social media users, there remains a notable gap in our comprehension of the disparities in framing political issues between these two distinct groups. To address this gap, we conduct a comprehensive investigation, focusing on the qualitative and quantitative distinctions in how social media users and traditional media outlets frame a series of American Supreme Court rulings on affirmative action, student loans, and abortion rights. Our findings reveal that, while some overlap in framing exists between social media and traditional media outlets, substantial differences emerge both across various topics and within specific framing categories. Compared to traditional news media, social media platforms tend to present more polarized stances across all framing categories. Further, we observe significant polarization in the news media's treatment (i.e., Left- vs. Right-leaning media) of affirmative action and abortion rights, whereas the topic of student loans tends to exhibit a greater degree of consensus. The disparities in framing between traditional and social media platforms carry significant implications for the formation of public opinion, policy decision-making, and the broader political landscape.


Supreme Court struck down affirmative action, but that won't stop Harvard

FOX News

You probably think the Supreme Court just ended racial discrimination in university admissions, euphemistically called affirmative action, and a new day of equal treatment without regard to race or skin color has dawned. Yes, SCOTUS invalidated the race-conscious practices of Harvard and UNC, holding that under the 14th Amendment a "student must be treated based on his or her experiences as an individual – not on the basis of race." That is a very important statement of our guiding constitutional principles. Yet already schools like Harvard are suggesting they will skirt the ruling by considering applicants' experience with race as opposed to the applicants' race itself. These games are not surprising and have been in the works for months.


The Supreme Court Killed the College-Admissions Essay

The Atlantic - Technology

Nestled within yesterday's Supreme Court decision declaring that race-conscious admissions programs, like those at Harvard and the University of North Carolina, are unconstitutional is a crucial carveout: Colleges are free to consider "an applicant's discussion of how race affected his or her life." In other words, they can weigh a candidate's race when it is mentioned in an admissions essay. Observers had already speculated about personal essays becoming invaluable tools for candidates who want to express their racial background without checking a box--now it is clear that the end of affirmative action will transform not only how colleges select students, but also how teenagers advertise themselves to colleges. For essays and statements to provide a workaround for pursuing diversity, applicants must first cast themselves as diverse. The American Council on Education, a nonprofit focused on the impacts of public policy on higher education, recently convened a panel dedicated to planning for the demise of affirmative action; admissions directors and consultants emphasized the need "to educate students about how to write about who they are in a very different way," expressing their "full authentic story" and "trials and tribulations."


Mitigating Discrimination in Insurance with Wasserstein Barycenters

Charpentier, Arthur, Hu, François, Ratz, Philipp

arXiv.org Artificial Intelligence

The insurance industry is heavily reliant on predictions of risks based on characteristics of potential customers. Although the use of said models is common, researchers have long pointed out that such practices perpetuate discrimination based on sensitive features such as gender or race. Given that such discrimination can often be attributed to historical data biases, eliminating, or at least mitigating, these biases is desirable. With the shift from more traditional models to machine-learning based predictions, calls for greater mitigation have grown anew, as simply excluding sensitive variables from the pricing process can be shown to be ineffective. In this article, we first investigate why predictions are a necessity within the industry and why correcting biases is not as straightforward as simply identifying a sensitive variable. We then propose to ease the biases through the use of Wasserstein barycenters instead of simple scaling. To demonstrate the effects and effectiveness of the approach, we apply it to real data and discuss its implications.
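
As a rough illustration of the underlying idea (ours, not the paper's code), the sketch below uses the closed form of the one-dimensional, equal-weight Wasserstein-2 barycenter, whose quantile function is the average of the group-wise quantile functions, to map two groups' hypothetical premiums onto a common distribution. The data, weights, and grid size are illustrative assumptions; the paper's actual construction and weighting may differ.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
group = rng.integers(0, 2, size=n)
# Hypothetical raw premiums: group 1 is systematically priced higher.
premium = np.where(group == 1,
                   rng.gamma(shape=3.0, scale=120.0, size=n),
                   rng.gamma(shape=3.0, scale=100.0, size=n))

# In one dimension, the equal-weight Wasserstein-2 barycenter has a closed form:
# its quantile function is the average of the group-wise quantile functions.
grid = np.linspace(0.0, 1.0, 513)
q_bar = 0.5 * (np.quantile(premium[group == 0], grid)
               + np.quantile(premium[group == 1], grid))

def repair(x, g):
    """Map a premium to the barycenter: x -> Q_bar(F_g(x))."""
    ref = np.sort(premium[group == g])
    rank = np.searchsorted(ref, x, side="right") / len(ref)
    return np.interp(rank, grid, q_bar)

repaired = np.array([repair(x, g) for x, g in zip(premium, group)])
print("mean premium before:",
      premium[group == 0].mean().round(1), "vs", premium[group == 1].mean().round(1))
print("mean premium after: ",
      repaired[group == 0].mean().round(1), "vs", repaired[group == 1].mean().round(1))
```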