Civil Rights & Constitutional Law


What is algorithmic bias?

@machinelearnbot

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. In early 2016, Microsoft launched Tay, an AI chatbot that was supposed to mimic the behavior of a curious teenage girl and engage in smart discussions with Twitter users. The project was meant to showcase the promise and potential of AI-powered conversational interfaces. However, in less than 24 hours, the innocent Tay turned into a racist, misogynistic, Holocaust-denying AI, debunking, once again, the myth of algorithmic neutrality. For years, we've assumed that artificial intelligence doesn't suffer from the prejudices and biases of its human creators because it's driven by pure, hard, mathematical logic.


You weren't supposed to actually implement it, Google

#artificialintelligence

Last month, I wrote a blog post warning about how, if you follow popular trends in NLP, you can easily and accidentally build a classifier that is pretty racist. To demonstrate this, I included some very simple code as a "cautionary tutorial".
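To make the warning concrete, here is a minimal sketch, not the post's original code, of the pattern it cautions against: training a sentiment classifier on top of off-the-shelf word embeddings and then scoring sentences with it. The GloVe file path and the tiny word lists below are illustrative placeholders, and the specific example sentences are assumptions for demonstration only.

```python
# Minimal sketch (not the original tutorial's code) of how a sentiment model
# built on pretrained word embeddings can absorb bias from its training data.
# Assumes a local GloVe file; paths and word lists are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression


def load_glove(path):
    """Load GloVe vectors into a dict mapping word -> numpy array."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors


glove = load_glove("glove.42B.300d.txt")  # illustrative path

# Tiny stand-in lexicons; a real version would use a published sentiment lexicon.
positive = ["good", "great", "excellent", "delicious", "wonderful"]
negative = ["bad", "awful", "terrible", "disgusting", "horrible"]

X = np.stack([glove[w] for w in positive + negative])
y = np.array([1] * len(positive) + [0] * len(negative))

clf = LogisticRegression(max_iter=1000).fit(X, y)


def sentence_score(text):
    """Average the classifier's positive-class probability over known words."""
    words = [w for w in text.lower().split() if w in glove]
    if not words:
        return 0.5  # neutral fallback when no word is in the vocabulary
    probs = clf.predict_proba(np.stack([glove[w] for w in words]))[:, 1]
    return float(probs.mean())


# The cautionary point: sentences that differ only in a name or a cuisine can
# receive systematically different scores, because the embeddings encode
# statistical associations from the web text they were trained on.
print(sentence_score("let's go get italian food"))
print(sentence_score("let's go get mexican food"))
print(sentence_score("my name is emily"))
print(sentence_score("my name is shaniqua"))
```

Nothing in this pipeline is exotic; the bias arrives silently through the pretrained vectors, which is exactly why the original post frames it as a cautionary tutorial rather than a recipe.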


Artificial Intelligence Has a Racism Issue

#artificialintelligence

It's long been thought that robots equipped with artificial intelligence would be the cold, purely objective counterpart to humans' emotional subjectivity. Unfortunately, it would seem that many of our imperfections have found their way into the machines. It turns out that these A.I. and machine-learning tools can have blind spots when it comes to women and minorities. This is especially concerning, considering that many companies, governmental organizations, and even hospitals are using machine learning and other A.I. tools to help with everything from preventing and treating injuries and diseases to predicting creditworthiness for loan applicants.


AI Research Is in Desperate Need of an Ethical Watchdog

#artificialintelligence

About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: they'd made a machine-learning algorithm that essentially works as gaydar. After training it on tens of thousands of photographs from a dating site, they found the algorithm could, for example, guess whether a white man in a photograph was gay with 81 percent accuracy. Their stated aim was to protect gay people. "[Our] findings expose a threat to the privacy and safety of gay men and women," wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.


FaceApp 'Racist' Filter Shows Users As Black, Asian, Caucasian And Indian

International Business Times

An array of ethnic filters on the photo-editing app FaceApp has stirred backlash as users decry the options for facial manipulation as racist. The selfie-editing app was updated earlier this month with four new filters: Asian, Black, Caucasian and Indian. The filters immediately drew criticism on Twitter from users who made comparisons to blackface and yellowface racial stereotypes. In addition to these blatantly racial face filters, which change everything from hair color to skin tone to eye color, other FaceApp users noted earlier this year that the "hot" filter consistently lightens people's skin color. "#FaceApp has a new feature where you can see yourself #CaucasianLiving.


Biased AI Is A Threat To Civil Liberties. The ACLU Has A Plan To Fix It

#artificialintelligence

Earlier this month, the 97-year-old nonprofit advocacy organization launched a partnership with AI Now, a New York-based research initiative that studies the social consequences of artificial intelligence. "We are increasingly aware that AI-related issues impact virtually every civil rights and civil liberties issue that the ACLU works on," Rachel Goodman, a staff attorney in the ACLU's Racial Justice program, tells Co.Design. AI is silently reshaping our entire society: our day-to-day work, the products we purchase, the news we read, how we vote, and how governments govern. But as anyone who's searched endlessly through Netflix without finding anything to watch can attest, AI isn't perfect. And while it's easy to pause a movie when Netflix's algorithm misjudges your tastes, the stakes are much higher when it comes to the algorithms used to decide more serious issues, like prison sentences, credit scores, or housing.


Microsoft's Zo chatbot told a user that 'Quran is very violent'

#artificialintelligence

Microsoft's earlier chatbot Tay faced problems when the bot picked up the worst of humanity and spouted racist, sexist comments on Twitter after it was introduced last year. Now it looks like Microsoft's latest bot, called 'Zo', has caused similar trouble, though not quite the scandal that Tay caused on Twitter. According to a BuzzFeed News report, Zo, which is part of the Kik messenger, told their reporter that the Quran was "very violent", and this was in response to a question about healthcare. The report also highlights how Zo had an opinion about the Osama Bin Laden capture, saying it was the result of years of 'intelligence' gathering by one administration. Microsoft has admitted the errors in Zo's behaviour and said they have been fixed.


Sorry, Dave, I can't code that: AI's prejudice problem

#artificialintelligence

Bureaucrats don't just come in uniforms and peaked caps. They come in 1U racks, too. They'll deny you credit, charge you more for paperclips than someone in another neighbourhood, and maybe even stop you getting out of jail. Algorithms are calling the shots these days, and they may not be as impartial as you thought. To some, algorithmic bias is a growing problem.


Future of Humanity Institute

#artificialintelligence

The Future of Humanity Institute (FHI) will be joining the Partnership on AI, a non-profit organisation founded by Amazon, Apple, Google/DeepMind, Facebook, IBM, and Microsoft, with the goal of formulating best practices for socially beneficial AI development. We will be joining the Partnership alongside technology firms like Sony as well as third-sector groups like Human Rights Watch, UNICEF, and our partners in Cambridge, the Leverhulme Centre for the Future of Intelligence. The Partnership on AI is organised around a set of thematic pillars, including safety-critical AI; fair, transparent, and accountable AI; and AI and social good. FHI will focus its work on the first of these pillars: safety-critical AI. Where AI tools are used to supplement or replace human decision-making, we must be sure that they are safe, trustworthy, and aligned with the ethics and preferences of the people who are influenced by their actions. Professor Nick Bostrom, director of FHI, said in response to the news, "We're delighted to be joining the Partnership on AI, and to be expanding our industry and nonprofit collaborations on AI safety."