Civil Rights & Constitutional Law


Some Amazon investors side with ACLU on facial recognition

Washington Post

Some Amazon investors said Monday they are siding with privacy and civil rights advocates who are urging the tech giant not to sell a powerful face recognition tool to police. The American Civil Liberties Union is leading the effort against Amazon's Rekognition product, delivering a petition with 152,000 signatures to the company's Seattle headquarters Monday and telling the company to "cancel this order." The groups are asking Amazon to stop marketing Rekognition to government agencies, citing privacy concerns and arguing that the technology can be used to discriminate against minorities. Amazon, through a spokesman, said Rekognition is an object detection tool that can be used for law enforcement tasks ranging from fighting human trafficking to finding lost children, and that, like computers generally, it can be a force for good in responsible hands.
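For readers curious what the tool actually exposes, Rekognition is an ordinary AWS service reachable through the standard boto3 SDK. Below is a minimal sketch of a face-comparison call, the kind of matching a lost-children search would rely on; the image file names, region, and similarity threshold are illustrative assumptions, not details from the article.

    import boto3

    # Assumes AWS credentials are already configured; paths are placeholders.
    client = boto3.client("rekognition", region_name="us-west-2")

    with open("probe.jpg", "rb") as probe, open("scene.jpg", "rb") as scene:
        response = client.compare_faces(
            SourceImage={"Bytes": probe.read()},   # face to search for
            TargetImage={"Bytes": scene.read()},   # image to search within
            SimilarityThreshold=80,                # assumed cutoff, tunable
        )

    # Each match carries a similarity score and a bounding box.
    for match in response["FaceMatches"]:
        print(match["Similarity"], match["Face"]["BoundingBox"])

The threshold parameter is exactly the kind of knob the civil-liberties debate turns on: lower it and the system returns more matches, but less reliable ones.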


What is algorithmic bias?

@machinelearnbot

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. In early 2016, Microsoft launched Tay, an AI chatbot that was supposed to mimic the behavior of a curious teenage girl and engage in smart discussions with Twitter users. The project was meant to display the promise and potential of AI-powered conversational interfaces. However, in less than 24 hours, the innocent Tay turned into a racist, misogynistic, Holocaust-denying AI, debunking, once again, the myth of algorithmic neutrality. For years, we've assumed that artificial intelligence doesn't suffer from the prejudices and biases of its human creators because it's driven by pure, hard, mathematical logic.


Racist, Sexist AI Could Be A Bigger Problem Than Lost Jobs

#artificialintelligence

Joy Buolamwini was conducting research at MIT on how computers recognize people's faces when she started experiencing something weird. Whenever she sat before a system's front-facing camera, it wouldn't recognize her face, even though it worked for her lighter-skinned friends. But when she put on a simple white mask, the face-tracking animation suddenly lit up the screen. Suspecting a more widespread problem, she carried out a study of the AI-powered facial recognition systems of Microsoft, IBM and Face++, a Chinese startup that has raised more than $500 million from investors. Buolamwini showed the systems 1,000 faces and asked them to identify each as male or female.
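The test methodology translates into very little code: run each vendor's classifier over a labeled benchmark and report error rates per demographic subgroup rather than one aggregate accuracy number. A minimal sketch of that idea follows, with hypothetical helper names; this is not the study's actual code.

    from collections import defaultdict

    def error_rates_by_group(examples, classify):
        # examples: iterable of (image, true_gender, group) triples;
        # classify: a wrapper around one vendor's gender-classification API.
        errors, totals = defaultdict(int), defaultdict(int)
        for image, true_gender, group in examples:
            totals[group] += 1
            if classify(image) != true_gender:
                errors[group] += 1
        return {group: errors[group] / totals[group] for group in totals}

Aggregate accuracy over the full 1,000 faces can look respectable while this per-group breakdown exposes exactly the disparity Buolamwini set out to measure.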


You weren't supposed to actually implement it, Google

#artificialintelligence

Last month, I wrote a blog post warning about how, if you follow popular trends in NLP, you can easily accidentally make a classifier that is pretty racist. To demonstrate this, I included the very simple code as a "cautionary tutorial."
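The pipeline that post demonstrates is short enough to compress here: train a sentiment classifier on nothing but pretrained word embeddings and a small sentiment lexicon, then watch it score semantically neutral sentences differently. The sketch below is a hedged reconstruction of the idea, not the post's tutorial verbatim; the vector file path and word lists are placeholder assumptions.

    import numpy as np
    from gensim.models import KeyedVectors
    from sklearn.linear_model import LogisticRegression

    # Placeholder path: pretrained GloVe vectors converted to word2vec format.
    vectors = KeyedVectors.load_word2vec_format("glove.42B.300d.w2v.txt")

    positive = ["good", "excellent", "happy", "delicious", "wonderful"]
    negative = ["bad", "terrible", "sad", "awful", "horrible"]

    X = np.array([vectors[w] for w in positive + negative])
    y = np.array([1] * len(positive) + [0] * len(negative))
    classifier = LogisticRegression().fit(X, y)  # sentiment from embeddings alone

    def sentence_score(sentence):
        # Average the embeddings of in-vocabulary words, then score.
        words = [w for w in sentence.lower().split() if w in vectors]
        mean_vector = np.mean([vectors[w] for w in words], axis=0)
        return classifier.predict_proba([mean_vector])[0, 1]

    # The trap: ethnicity words inherit sentiment from the training corpus,
    # so these two equally neutral sentences can score very differently.
    print(sentence_score("let's go get italian food"))
    print(sentence_score("let's go get mexican food"))

In the original post, scores for sentences mentioning different cuisines and names tracked ethnicity rather than meaning, which was exactly the cautionary point.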


Artificial Intelligence Has a Racism Issue

#artificialintelligence

It's long been thought that robots equipped with artificial intelligence would be the cold, purely objective counterpart to humans' emotional subjectivity. Unfortunately, it would seem that many of our imperfections have found their way into the machines. It turns out that these A.I. and machine-learning tools can have blind spots when it comes to women and minorities. This is especially concerning, considering that many companies, governmental organizations, and even hospitals are using machine learning and other A.I. tools to help with everything from preventing and treating injuries and diseases to predicting creditworthiness for loan applicants.


AI Research Is in Desperate Need of an Ethical Watchdog

#artificialintelligence

About a week ago, Stanford University researchers posted a study online on the latest dystopian AI: they had made a machine-learning algorithm that essentially works as gaydar. After training the algorithm with tens of thousands of photographs from a dating site, it could, for example, guess whether a white man in a photograph was gay with 81 percent accuracy. Their stated aim, counterintuitively, was to protect gay people. "[Our] findings expose a threat to the privacy and safety of gay men and women," wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public to its dangers.


FaceApp 'Racist' Filter Shows Users As Black, Asian, Caucasian And Indian

International Business Times

An array of ethnic filters on the photo-editing app FaceApp has stirred backlash, with users decrying the options for facial manipulation as racist. The app was updated earlier this month with four new filters: Asian, Black, Caucasian and Indian. The filters immediately drew criticism on Twitter from users who compared them to blackface and yellowface racial stereotypes. In addition to these blatantly racial face filters, which change everything from hair color to skin tone to eye color, other FaceApp users noted earlier this year that the "hot" filter consistently lightens people's skin. One tweet read: "#FaceApp has a new feature where you can see yourself #CaucasianLiving."


Biased AI Is A Threat To Civil Liberties. The ACLU Has A Plan To Fix It

#artificialintelligence

Earlier this month, the ACLU, the 97-year-old nonprofit advocacy organization, launched a partnership with AI Now, a New York-based research initiative that studies the social consequences of artificial intelligence. "We are increasingly aware that AI-related issues impact virtually every civil rights and civil liberties issue that the ACLU works on," Rachel Goodman, a staff attorney in the ACLU's Racial Justice Program, tells Co.Design. AI is silently reshaping our entire society: our day-to-day work, the products we purchase, the news we read, how we vote, and how governments govern. But as anyone who has searched endlessly through Netflix without finding anything to watch can attest, AI isn't perfect. And while it's easy to pause a movie when Netflix's algorithm misjudges your tastes, the stakes are much higher for the algorithms used to decide more serious matters, like prison sentences, credit scores, or housing.