
Reparation


The Filmmaker Who Says AI Is Reparations

WIRED

Willonius Hatcher was looking for a way in. He'd tried just about everything to break into Hollywood, and because there no longer exists a traditional entry point into its hallowed pantheon of performers (we can thank the internet for doing away with all notions of conventional success), the pursuit of it sometimes felt like a mirage. He could see it, and he knew he could get there because he believed in his talent; only the closer he got, the farther the door seemed. He'd done the stand-up circuit, short film work, sketches, even video editing. None of them got him fully in the door.


Fairness Feedback Loops: Training on Synthetic Data Amplifies Bias

Wyllie, Sierra, Shumailov, Ilia, Papernot, Nicolas

arXiv.org Artificial Intelligence

Model-induced distribution shifts (MIDS) occur as previous model outputs pollute new model training sets over generations of models. This is known as model collapse in the case of generative models, and as performative prediction or unfairness feedback loops for supervised models. When a model induces a distribution shift, it also encodes its mistakes, biases, and unfairnesses into the ground truth of its data ecosystem. We introduce a framework that allows us to track multiple MIDS over many generations, finding that they can lead to loss in performance, fairness, and minoritized group representation, even in initially unbiased datasets. Despite these negative consequences, we identify how models might be used for positive, intentional interventions in their data ecosystems, providing redress for historical discrimination through a framework called algorithmic reparation (AR). We simulate AR interventions by curating representative training batches for stochastic gradient descent to demonstrate how AR can improve upon the unfairnesses of models and data ecosystems subject to other MIDS. Our work takes an important step towards identifying, mitigating, and taking accountability for the unfair feedback loops enabled by the idea that ML systems are inherently neutral and objective.
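The intervention the abstract describes, curating representative training batches for stochastic gradient descent, can be pictured as a group-stratified batch sampler: each minibatch is assembled so that every demographic group receives a fixed share of its slots. The sketch below is only an illustration of that idea, not the authors' implementation; the function name, the `target_shares` parameter, and the rounding scheme are all assumptions.

```python
import random
from collections import defaultdict

def curate_representative_batches(samples, groups, batch_size, target_shares, seed=0):
    """Assemble minibatches whose group composition matches target_shares.

    samples       : list of training examples
    groups        : parallel list of group labels, one per example
    batch_size    : nominal examples per batch
    target_shares : dict mapping group label -> desired fraction of each batch
                    (hypothetical knob; the paper's exact curation rule may differ)
    """
    rng = random.Random(seed)

    # Pool examples by group and shuffle within each pool.
    pools = defaultdict(list)
    for sample, group in zip(samples, groups):
        pools[group].append(sample)
    for pool in pools.values():
        rng.shuffle(pool)

    # Per-batch quota for each group (at least one slot so no group vanishes).
    quota = {g: max(1, round(batch_size * share)) for g, share in target_shares.items()}

    # Stop when the scarcest group runs out, so every batch stays representative.
    n_batches = min(len(pools[g]) // q for g, q in quota.items())

    batches, cursor = [], {g: 0 for g in quota}
    for _ in range(n_batches):
        batch = []
        for g, q in quota.items():
            batch.extend(pools[g][cursor[g]:cursor[g] + q])
            cursor[g] += q
        rng.shuffle(batch)  # avoid a fixed group ordering within the batch
        batches.append(batch)
    return batches
```

Feeding SGD from batches built this way guarantees minority-group gradient signal in every update, which is the mechanism by which such curation could counteract the representation loss that MIDS cause over generations.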


We 'interviewed' Harriet Tubman using AI. It got a little weird.

Washington Post - Technology News

Harriet Tubman didn't give many interviews in her lifetime, and when she did, they were generally conducted by one of her friends, Sarah Hopkins Bradford, a White children's book author in Upstate New York, where Tubman spent the last decades of her life. Those interviews resulted in two biographies, published in 1869 and 1886. Though Bradford obviously admired Tubman, the books suffer from her sometimes patronizing attitude toward her subject, her use of racial slurs and her awkward attempts to re-create the speech patterns of a Black woman raised enslaved in Maryland. Some of the long "quotes" from Tubman were completely made up, and it shows. So I was curious to see what would happen recently when I had my own "interview" with Tubman, using the online educator Khan Academy's new artificial intelligence learning tool Khanmigo, which enables users to have live chats with dozens of simulated historical figures such as Abigail Adams, Genghis Khan, Montezuma and Winston Churchill. Would it come off horribly, a 21st-century minstrelsy?


Racism Cannot Be Reduced to Mere Computation

Slate

A historian of technology and race responds to Tochi Onyebuchi's "How to Pay Reparations." Tochi Onyebuchi's "How to Pay Reparations" spoke to me. Its themes rang virtually every note of my twentysomething-year-long career. In 1998, I made my first digital footprint with a signed online petition in support of reparations for the Tulsa race riots. I endured countless run-ins with Oklahoma good ol' boys while crisscrossing the state, working for candidates representing a perpetually losing political party.