Can Auditing Eliminate Bias from Algorithms? – The Markup

#artificialintelligence

For more than a decade, journalists and researchers have been writing about the dangers of relying on algorithms to make weighty decisions: who gets locked up, who gets a job, who gets a loan, even who has priority for COVID-19 vaccines. Rather than removing bias, one algorithm after another has codified and perpetuated it, while companies have largely shielded their algorithms from public scrutiny. The big question ever since: How do we solve this problem? Lawmakers and researchers have advocated for algorithmic audits, which would dissect and stress-test algorithms to see how they work and whether they are achieving their stated goals or producing biased outcomes. And there is a growing field of private auditing firms that purport to do just that.
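
As a concrete illustration of what the quantitative core of such an audit can look like, the sketch below applies the common "four-fifths" disparate impact test to a model's decisions. This is a minimal Python sketch under stated assumptions: the decision and group data are hypothetical, and real audits combine many such checks with qualitative review.

import numpy as np

def disparate_impact_ratio(decisions, groups, favorable=1):
    """Ratio of favorable-outcome rates between the worst- and
    best-treated groups; values below 0.8 are a common red flag."""
    rates = {g: np.mean(decisions[groups == g] == favorable)
             for g in np.unique(groups)}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical audit sample: 1 = favorable decision (e.g., loan approved).
decisions = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0])
groups = np.array(["a"] * 6 + ["b"] * 6)

ratio, rates = disparate_impact_ratio(decisions, groups)
print("favorable-outcome rates by group:", rates)
print("disparate impact ratio: %.2f" % ratio)  # below 0.8 suggests bias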


The Algorithmic Auditing Trap

#artificialintelligence

This op-ed was written by Mona Sloane, a sociologist and senior research scientist at the NYU Center for Responsible A.I. and a fellow at the NYU Institute for Public Knowledge. Her work focuses on design and inequality in the context of algorithms and artificial intelligence. We have a new A.I. race on our hands: the race to define and steer what it means to audit algorithms. Governing bodies know that they must come up with solutions to the harm algorithms can inflict. This technology has disproportionate impacts on racial minorities, the economically disadvantaged, womxn, and people with disabilities, in applications ranging from health care to welfare, hiring, and education.


Enhancing trust in artificial intelligence: Audits and explanations can help

#artificialintelligence

There is a lively debate all over the world regarding AI's perceived "black box" problem. Most profoundly, if a machine can learn on its own, how does it explain its conclusions? This issue comes up most frequently in the context of how to address possible algorithmic bias. One way to address it is to mandate a right to a human decision under Article 22 of the General Data Protection Regulation (GDPR). Here in the United States, Senators Wyden and Booker propose in the Algorithmic Accountability Act that companies be compelled to conduct impact assessments.
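
To make the explanation side of this debate concrete, here is a minimal sketch of one widely used, model-agnostic technique, permutation feature importance: shuffle each input feature and measure how much the model's accuracy drops. The model and data below are synthetic stand-ins chosen purely for illustration, not anything prescribed by the GDPR or the Algorithmic Accountability Act.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an opaque, high-stakes classifier.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in held-out accuracy:
# a large drop means the model leans heavily on that input.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, drop in enumerate(result.importances_mean):
    print("feature %d: accuracy drop when shuffled = %.3f" % (i, drop))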


We Need Bug Bounties for Bad Algorithms

#artificialintelligence

Amit Elazari Bar On is a doctoral law candidate (J.S.D.) at UC Berkeley School of Law, a grantee of the Center for Long-Term Cybersecurity (CLTC) at the Berkeley School of Information, and a member of AFOG, the Algorithmic Fairness and Opacity Working Group at Berkeley. In 2017, Amit was a CTSP Fellow. We are told opaque algorithms and black boxes are going to control our world, shaping every aspect of our lives. We are warned that without accountability and transparency, and generally without better laws, humanity is doomed to a future of machine-generated bias and deception. From calls to open the black box to the limitations of explanations of inscrutable machine-learning models, the regulation of algorithms is one of the most pressing policy concerns in today's digital society.